Dec 13 09:10:43.012748 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 09:10:43.012794 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:10:43.012814 kernel: BIOS-provided physical RAM map:
Dec 13 09:10:43.012826 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 09:10:43.012837 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 09:10:43.012848 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 09:10:43.012861 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 13 09:10:43.012871 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 13 09:10:43.012881 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 09:10:43.012893 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 09:10:43.012908 kernel: NX (Execute Disable) protection: active
Dec 13 09:10:43.012919 kernel: APIC: Static calls initialized
Dec 13 09:10:43.012928 kernel: SMBIOS 2.8 present.
Dec 13 09:10:43.012940 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 13 09:10:43.012954 kernel: Hypervisor detected: KVM
Dec 13 09:10:43.012969 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 09:10:43.012985 kernel: kvm-clock: using sched offset of 3281825965 cycles
Dec 13 09:10:43.012997 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 09:10:43.013009 kernel: tsc: Detected 2000.000 MHz processor
Dec 13 09:10:43.013021 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 09:10:43.013034 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 09:10:43.013046 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 13 09:10:43.013058 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 09:10:43.013071 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 09:10:43.013086 kernel: ACPI: Early table checksum verification disabled
Dec 13 09:10:43.013098 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Dec 13 09:10:43.013109 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013121 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013133 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013145 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 09:10:43.013157 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013168 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013180 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013196 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013208 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Dec 13 09:10:43.013220 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Dec 13 09:10:43.013232 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 09:10:43.013244 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Dec 13 09:10:43.013255 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Dec 13 09:10:43.013266 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Dec 13 09:10:43.013288 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Dec 13 09:10:43.013304 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 09:10:43.013316 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 09:10:43.013344 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 09:10:43.013357 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 09:10:43.013370 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Dec 13 09:10:43.013383 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Dec 13 09:10:43.013399 kernel: Zone ranges:
Dec 13 09:10:43.013412 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 09:10:43.013425 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Dec 13 09:10:43.013438 kernel: Normal empty
Dec 13 09:10:43.013451 kernel: Movable zone start for each node
Dec 13 09:10:43.013465 kernel: Early memory node ranges
Dec 13 09:10:43.013477 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 09:10:43.013491 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 13 09:10:43.013504 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Dec 13 09:10:43.013520 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 09:10:43.013537 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 09:10:43.013550 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Dec 13 09:10:43.013562 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 09:10:43.013575 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 09:10:43.013588 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 09:10:43.013601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 09:10:43.013614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 09:10:43.013627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 09:10:43.013643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 09:10:43.013656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 09:10:43.013669 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 09:10:43.013682 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 09:10:43.013695 kernel: TSC deadline timer available
Dec 13 09:10:43.013709 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 09:10:43.013724 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 09:10:43.013737 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 09:10:43.013755 kernel: Booting paravirtualized kernel on KVM
Dec 13 09:10:43.013768 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 09:10:43.013784 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 09:10:43.013797 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 09:10:43.013810 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 09:10:43.013822 kernel: pcpu-alloc: [0] 0 1
Dec 13 09:10:43.013836 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 09:10:43.013851 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:10:43.013865 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 09:10:43.013878 kernel: random: crng init done
Dec 13 09:10:43.013894 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 09:10:43.013908 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 09:10:43.013921 kernel: Fallback order for Node 0: 0
Dec 13 09:10:43.013935 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Dec 13 09:10:43.013948 kernel: Policy zone: DMA32
Dec 13 09:10:43.013962 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 09:10:43.013977 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 09:10:43.013990 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 09:10:43.014007 kernel: Kernel/User page tables isolation: enabled
Dec 13 09:10:43.014019 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 09:10:43.014032 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 09:10:43.014045 kernel: Dynamic Preempt: voluntary
Dec 13 09:10:43.014057 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 09:10:43.014071 kernel: rcu: RCU event tracing is enabled.
Dec 13 09:10:43.014085 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 09:10:43.014098 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 09:10:43.014110 kernel: Rude variant of Tasks RCU enabled.
Dec 13 09:10:43.014122 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 09:10:43.014136 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 09:10:43.014148 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 09:10:43.014159 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 09:10:43.014176 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 09:10:43.014188 kernel: Console: colour VGA+ 80x25 Dec 13 09:10:43.014213 kernel: printk: console [tty0] enabled Dec 13 09:10:43.014243 kernel: printk: console [ttyS0] enabled Dec 13 09:10:43.014273 kernel: ACPI: Core revision 20230628 Dec 13 09:10:43.014303 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 09:10:43.014323 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 09:10:43.014364 kernel: x2apic enabled Dec 13 09:10:43.014377 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 09:10:43.014389 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 09:10:43.014403 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Dec 13 09:10:43.014416 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Dec 13 09:10:43.014430 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 09:10:43.014442 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 09:10:43.014471 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 09:10:43.014486 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 09:10:43.014499 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 09:10:43.014516 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 09:10:43.014529 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 09:10:43.014543 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 09:10:43.014557 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 09:10:43.014569 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 09:10:43.014583 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 09:10:43.014604 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 09:10:43.014617 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 09:10:43.014630 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 09:10:43.014644 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 09:10:43.014657 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 09:10:43.014671 kernel: Freeing SMP alternatives memory: 32K Dec 13 09:10:43.014683 kernel: pid_max: default: 32768 minimum: 301 Dec 13 09:10:43.014696 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 09:10:43.014713 kernel: landlock: Up and running. Dec 13 09:10:43.014726 kernel: SELinux: Initializing. Dec 13 09:10:43.014740 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 09:10:43.014754 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 09:10:43.014768 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 13 09:10:43.014781 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:10:43.014796 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:10:43.014809 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:10:43.014823 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Dec 13 09:10:43.014839 kernel: signal: max sigframe size: 1776 Dec 13 09:10:43.014853 kernel: rcu: Hierarchical SRCU implementation. Dec 13 09:10:43.014867 kernel: rcu: Max phase no-delay instances is 400. Dec 13 09:10:43.014881 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 09:10:43.014895 kernel: smp: Bringing up secondary CPUs ... Dec 13 09:10:43.014909 kernel: smpboot: x86: Booting SMP configuration: Dec 13 09:10:43.014927 kernel: .... node #0, CPUs: #1 Dec 13 09:10:43.014941 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 09:10:43.014955 kernel: smpboot: Max logical packages: 1 Dec 13 09:10:43.014972 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Dec 13 09:10:43.014986 kernel: devtmpfs: initialized Dec 13 09:10:43.015000 kernel: x86/mm: Memory block size: 128MB Dec 13 09:10:43.015014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 09:10:43.015028 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 09:10:43.015042 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 09:10:43.015056 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 09:10:43.015070 kernel: audit: initializing netlink subsys (disabled) Dec 13 09:10:43.015084 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 09:10:43.015102 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 09:10:43.015116 kernel: audit: type=2000 audit(1734081042.555:1): state=initialized audit_enabled=0 res=1 Dec 13 09:10:43.015132 kernel: cpuidle: using governor menu Dec 13 09:10:43.015146 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 09:10:43.015160 kernel: dca service started, version 1.12.1 Dec 13 09:10:43.015175 kernel: PCI: Using configuration type 1 for base access Dec 13 09:10:43.015190 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 09:10:43.015205 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 09:10:43.015218 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 09:10:43.015236 kernel: ACPI: Added _OSI(Module Device) Dec 13 09:10:43.015250 kernel: ACPI: Added _OSI(Processor Device) Dec 13 09:10:43.015264 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 09:10:43.015277 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 09:10:43.015291 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 09:10:43.015304 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 09:10:43.015318 kernel: ACPI: Interpreter enabled Dec 13 09:10:43.015365 kernel: ACPI: PM: (supports S0 S5) Dec 13 09:10:43.015377 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 09:10:43.015394 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 09:10:43.015409 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 09:10:43.015423 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 09:10:43.015435 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 09:10:43.015782 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 09:10:43.015967 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 09:10:43.016105 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 09:10:43.016128 kernel: acpiphp: Slot [3] registered Dec 13 09:10:43.016142 kernel: acpiphp: Slot [4] registered Dec 13 09:10:43.016157 kernel: acpiphp: Slot [5] registered Dec 13 09:10:43.016171 kernel: acpiphp: Slot [6] registered Dec 13 09:10:43.016185 kernel: acpiphp: Slot [7] registered Dec 13 09:10:43.016197 kernel: acpiphp: Slot [8] registered Dec 13 09:10:43.016209 kernel: acpiphp: Slot [9] registered Dec 13 09:10:43.016220 kernel: acpiphp: Slot [10] registered Dec 13 09:10:43.016232 kernel: acpiphp: Slot [11] registered Dec 13 09:10:43.016249 kernel: acpiphp: Slot [12] registered Dec 13 09:10:43.016261 kernel: acpiphp: Slot [13] registered Dec 13 09:10:43.016274 kernel: acpiphp: Slot [14] registered Dec 13 09:10:43.016287 kernel: acpiphp: Slot [15] registered Dec 13 09:10:43.016300 kernel: acpiphp: Slot [16] registered Dec 13 09:10:43.016314 kernel: acpiphp: Slot [17] registered Dec 13 09:10:43.016355 kernel: acpiphp: Slot [18] registered Dec 13 09:10:43.016368 kernel: acpiphp: Slot [19] registered Dec 13 09:10:43.016379 kernel: acpiphp: Slot [20] registered Dec 13 09:10:43.016391 kernel: acpiphp: Slot [21] registered Dec 13 09:10:43.016421 kernel: acpiphp: Slot [22] registered Dec 13 09:10:43.016434 kernel: acpiphp: Slot [23] registered Dec 13 09:10:43.016448 kernel: acpiphp: Slot [24] registered Dec 13 09:10:43.016462 kernel: acpiphp: Slot [25] registered Dec 13 09:10:43.016476 kernel: acpiphp: Slot [26] registered Dec 13 09:10:43.016490 kernel: acpiphp: Slot [27] registered Dec 13 09:10:43.016503 kernel: acpiphp: Slot [28] registered Dec 13 09:10:43.016514 kernel: acpiphp: Slot [29] registered Dec 13 09:10:43.016526 kernel: acpiphp: Slot [30] registered Dec 13 09:10:43.016542 kernel: acpiphp: Slot [31] registered Dec 13 09:10:43.016555 kernel: PCI host bridge to bus 0000:00 Dec 13 09:10:43.016759 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 09:10:43.016906 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Dec 13 09:10:43.017041 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 09:10:43.017166 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 09:10:43.017292 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 09:10:43.017448 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 09:10:43.017639 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 09:10:43.017807 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 09:10:43.018043 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 09:10:43.018248 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Dec 13 09:10:43.018425 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 09:10:43.018600 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 09:10:43.020494 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 09:10:43.020687 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 09:10:43.020859 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 13 09:10:43.021013 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Dec 13 09:10:43.021170 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 09:10:43.021307 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 09:10:43.021538 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 09:10:43.021729 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 09:10:43.024709 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 09:10:43.024898 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 09:10:43.025051 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Dec 13 09:10:43.025206 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 09:10:43.025436 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 09:10:43.025615 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 09:10:43.025807 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Dec 13 09:10:43.025964 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Dec 13 09:10:43.026117 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 09:10:43.026297 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 09:10:43.027654 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Dec 13 09:10:43.027850 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Dec 13 09:10:43.028032 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 09:10:43.028206 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Dec 13 09:10:43.028449 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Dec 13 09:10:43.028618 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Dec 13 09:10:43.028765 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 09:10:43.028941 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Dec 13 09:10:43.029097 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 09:10:43.029264 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Dec 13 09:10:43.033040 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 09:10:43.033240 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Dec 13 09:10:43.033381 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Dec 13 09:10:43.033527 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Dec 13 09:10:43.033675 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 13 09:10:43.033830 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 09:10:43.033992 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Dec 13 09:10:43.034138 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 13 09:10:43.034157 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 09:10:43.034170 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 09:10:43.034182 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 09:10:43.034194 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 09:10:43.034205 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 09:10:43.034225 kernel: iommu: Default domain type: Translated
Dec 13 09:10:43.034238 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 09:10:43.034250 kernel: PCI: Using ACPI for IRQ routing
Dec 13 09:10:43.034263 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 09:10:43.034277 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 09:10:43.034289 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 13 09:10:43.034457 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 09:10:43.034552 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 09:10:43.034655 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 09:10:43.034666 kernel: vgaarb: loaded
Dec 13 09:10:43.034674 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 09:10:43.034683 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 09:10:43.034692 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 09:10:43.034706 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 09:10:43.034721 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 09:10:43.034735 kernel: pnp: PnP ACPI init
Dec 13 09:10:43.034749 kernel: pnp: PnP ACPI: found 4 devices
Dec 13 09:10:43.034763 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 09:10:43.034772 kernel: NET: Registered PF_INET protocol family
Dec 13 09:10:43.034780 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 09:10:43.034789 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 09:10:43.034797 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 09:10:43.034805 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 09:10:43.034813 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 09:10:43.034821 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 09:10:43.034830 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.034841 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.034849 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 09:10:43.034857 kernel: NET: Registered PF_XDP protocol family
Dec 13 09:10:43.034959 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 09:10:43.035045 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 09:10:43.035130 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 09:10:43.035245 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 09:10:43.037408 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 09:10:43.037633 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 09:10:43.037761 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 09:10:43.037783 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 09:10:43.037885 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40486 usecs
Dec 13 09:10:43.037896 kernel: PCI: CLS 0 bytes, default 64
Dec 13 09:10:43.037906 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 09:10:43.037915 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 13 09:10:43.037924 kernel: Initialise system trusted keyrings
Dec 13 09:10:43.037939 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 09:10:43.037948 kernel: Key type asymmetric registered
Dec 13 09:10:43.037956 kernel: Asymmetric key parser 'x509' registered
Dec 13 09:10:43.037965 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 09:10:43.037973 kernel: io scheduler mq-deadline registered
Dec 13 09:10:43.037981 kernel: io scheduler kyber registered
Dec 13 09:10:43.037990 kernel: io scheduler bfq registered
Dec 13 09:10:43.037998 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 09:10:43.038007 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 09:10:43.038015 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 09:10:43.038026 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 09:10:43.038034 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 09:10:43.038042 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 09:10:43.038050 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 09:10:43.038059 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 09:10:43.038067 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 09:10:43.038209 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 09:10:43.038227 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 09:10:43.038321 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 09:10:43.038539 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T09:10:42 UTC (1734081042)
Dec 13 09:10:43.038625 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 09:10:43.038636 kernel: intel_pstate: CPU model not supported
Dec 13 09:10:43.038645 kernel: NET: Registered PF_INET6 protocol family
Dec 13 09:10:43.038653 kernel: Segment Routing with IPv6
Dec 13 09:10:43.038662 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 09:10:43.038670 kernel: NET: Registered PF_PACKET protocol family
Dec 13 09:10:43.038685 kernel: Key type dns_resolver registered
Dec 13 09:10:43.038698 kernel: IPI shorthand broadcast: enabled
Dec 13 09:10:43.038711 kernel: sched_clock: Marking stable (1210008231, 153657746)->(1409290034, -45624057)
Dec 13 09:10:43.038723 kernel: registered taskstats version 1
Dec 13 09:10:43.038736 kernel: Loading compiled-in X.509 certificates
Dec 13 09:10:43.038751 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 09:10:43.038764 kernel: Key type .fscrypt registered
Dec 13 09:10:43.038777 kernel: Key type fscrypt-provisioning registered Dec 13 09:10:43.038791 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 09:10:43.038806 kernel: ima: Allocated hash algorithm: sha1 Dec 13 09:10:43.038815 kernel: ima: No architecture policies found Dec 13 09:10:43.038823 kernel: clk: Disabling unused clocks Dec 13 09:10:43.038831 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 09:10:43.038839 kernel: Write protecting the kernel read-only data: 36864k Dec 13 09:10:43.038865 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 09:10:43.038876 kernel: Run /init as init process Dec 13 09:10:43.038884 kernel: with arguments: Dec 13 09:10:43.038893 kernel: /init Dec 13 09:10:43.038904 kernel: with environment: Dec 13 09:10:43.038912 kernel: HOME=/ Dec 13 09:10:43.038920 kernel: TERM=linux Dec 13 09:10:43.038928 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 09:10:43.038943 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:10:43.038955 systemd[1]: Detected virtualization kvm. Dec 13 09:10:43.038964 systemd[1]: Detected architecture x86-64. Dec 13 09:10:43.038972 systemd[1]: Running in initrd. Dec 13 09:10:43.038984 systemd[1]: No hostname configured, using default hostname. Dec 13 09:10:43.038992 systemd[1]: Hostname set to . Dec 13 09:10:43.039001 systemd[1]: Initializing machine ID from VM UUID. Dec 13 09:10:43.039010 systemd[1]: Queued start job for default target initrd.target. Dec 13 09:10:43.039019 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:10:43.039027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:10:43.039037 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 09:10:43.039046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:10:43.039057 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 09:10:43.039066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 09:10:43.039077 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 09:10:43.039086 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 09:10:43.039095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:10:43.039104 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:10:43.039112 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:10:43.039124 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:10:43.039133 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:10:43.039144 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:10:43.039156 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:10:43.039170 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 13 09:10:43.039187 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 09:10:43.039203 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 09:10:43.039214 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:10:43.039223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:10:43.039232 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:10:43.039241 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:10:43.039250 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 09:10:43.039258 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:10:43.039268 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 09:10:43.039280 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 09:10:43.039288 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:10:43.039297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:10:43.039306 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:43.039315 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 09:10:43.040486 systemd-journald[182]: Collecting audit messages is disabled. Dec 13 09:10:43.040534 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:10:43.040545 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 09:10:43.040556 systemd-journald[182]: Journal started Dec 13 09:10:43.040580 systemd-journald[182]: Runtime Journal (/run/log/journal/ee7ab1b757ba4aaabaf62da34301c6fa) is 4.9M, max 39.3M, 34.4M free. Dec 13 09:10:43.018805 systemd-modules-load[183]: Inserted module 'overlay' Dec 13 09:10:43.104128 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 09:10:43.104160 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 09:10:43.104174 kernel: Bridge firewalling registered Dec 13 09:10:43.104197 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:10:43.062135 systemd-modules-load[183]: Inserted module 'br_netfilter' Dec 13 09:10:43.105389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:10:43.106585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:43.112373 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 09:10:43.121635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:10:43.124757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:10:43.134590 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:10:43.150616 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:10:43.154406 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:10:43.156450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:10:43.157713 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 09:10:43.167694 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 09:10:43.168953 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:10:43.178586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 09:10:43.187471 dracut-cmdline[216]: dracut-dracut-053 Dec 13 09:10:43.193852 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 09:10:43.219850 systemd-resolved[220]: Positive Trust Anchors: Dec 13 09:10:43.219868 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:10:43.219903 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:10:43.228288 systemd-resolved[220]: Defaulting to hostname 'linux'. Dec 13 09:10:43.231852 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:10:43.233764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:10:43.322382 kernel: SCSI subsystem initialized Dec 13 09:10:43.334399 kernel: Loading iSCSI transport class v2.0-870. Dec 13 09:10:43.350378 kernel: iscsi: registered transport (tcp) Dec 13 09:10:43.378406 kernel: iscsi: registered transport (qla4xxx) Dec 13 09:10:43.378509 kernel: QLogic iSCSI HBA Driver Dec 13 09:10:43.443358 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 09:10:43.449862 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 09:10:43.493788 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 09:10:43.493894 kernel: device-mapper: uevent: version 1.0.3 Dec 13 09:10:43.493908 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 09:10:43.544434 kernel: raid6: avx2x4 gen() 28917 MB/s Dec 13 09:10:43.561402 kernel: raid6: avx2x2 gen() 28426 MB/s Dec 13 09:10:43.578558 kernel: raid6: avx2x1 gen() 20150 MB/s Dec 13 09:10:43.578655 kernel: raid6: using algorithm avx2x4 gen() 28917 MB/s Dec 13 09:10:43.596611 kernel: raid6: .... xor() 9382 MB/s, rmw enabled Dec 13 09:10:43.596714 kernel: raid6: using avx2x2 recovery algorithm Dec 13 09:10:43.626366 kernel: xor: automatically using best checksumming function avx Dec 13 09:10:43.811387 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 09:10:43.826319 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:10:43.835644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 09:10:43.863412 systemd-udevd[402]: Using default interface naming scheme 'v255'. Dec 13 09:10:43.869456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:10:43.877910 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 09:10:43.899076 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Dec 13 09:10:43.943009 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:10:43.949583 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:10:44.021641 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:10:44.029550 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 09:10:44.062401 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 09:10:44.064302 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:10:44.065114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:10:44.069083 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:10:44.076940 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 09:10:44.108286 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:10:44.135445 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 13 09:10:44.205236 kernel: scsi host0: Virtio SCSI HBA Dec 13 09:10:44.205525 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 09:10:44.205543 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 09:10:44.205671 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 09:10:44.205683 kernel: GPT:9289727 != 125829119 Dec 13 09:10:44.205693 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 09:10:44.205703 kernel: GPT:9289727 != 125829119 Dec 13 09:10:44.205721 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 09:10:44.205732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:10:44.205743 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 09:10:44.205754 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 13 09:10:44.224701 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Dec 13 09:10:44.225645 kernel: AES CTR mode by8 optimization enabled Dec 13 09:10:44.225676 kernel: libata version 3.00 loaded. Dec 13 09:10:44.197512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:10:44.197657 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:10:44.233531 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 09:10:44.251351 kernel: scsi host1: ata_piix Dec 13 09:10:44.251573 kernel: scsi host2: ata_piix Dec 13 09:10:44.251759 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Dec 13 09:10:44.251780 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Dec 13 09:10:44.199961 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:10:44.200832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:44.201019 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:44.201952 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 09:10:44.218788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:44.267908 kernel: ACPI: bus type USB registered Dec 13 09:10:44.267980 kernel: usbcore: registered new interface driver usbfs Dec 13 09:10:44.293366 kernel: usbcore: registered new interface driver hub Dec 13 09:10:44.293642 kernel: usbcore: registered new device driver usb Dec 13 09:10:44.312365 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Dec 13 09:10:44.312465 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (449) Dec 13 09:10:44.331243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 09:10:44.352773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:44.366322 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 09:10:44.373270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 09:10:44.378226 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 09:10:44.379055 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 09:10:44.388768 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 09:10:44.393905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:10:44.400705 disk-uuid[534]: Primary Header is updated. Dec 13 09:10:44.400705 disk-uuid[534]: Secondary Entries is updated. Dec 13 09:10:44.400705 disk-uuid[534]: Secondary Header is updated. Dec 13 09:10:44.412359 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:10:44.425371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:10:44.446480 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:10:44.473165 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 13 09:10:44.485943 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 13 09:10:44.486283 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 13 09:10:44.487475 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 13 09:10:44.487674 kernel: hub 1-0:1.0: USB hub found Dec 13 09:10:44.487908 kernel: hub 1-0:1.0: 2 ports detected Dec 13 09:10:45.430987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:10:45.432780 disk-uuid[535]: The operation has completed successfully. Dec 13 09:10:45.477669 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 09:10:45.477804 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 09:10:45.493610 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 09:10:45.499735 sh[563]: Success Dec 13 09:10:45.533395 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 09:10:45.602537 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 09:10:45.617555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 09:10:45.623423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 09:10:45.653373 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 09:10:45.653446 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:10:45.653460 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 09:10:45.657054 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 09:10:45.657137 kernel: BTRFS info (device dm-0): using free space tree Dec 13 09:10:45.667098 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 09:10:45.668732 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 09:10:45.675577 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 09:10:45.679579 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 09:10:45.691601 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:45.691697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:10:45.694103 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:10:45.701378 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:10:45.715723 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:45.715379 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 09:10:45.724899 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 09:10:45.734987 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 09:10:45.834147 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:10:45.841735 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:10:45.882420 systemd-networkd[748]: lo: Link UP Dec 13 09:10:45.882980 systemd-networkd[748]: lo: Gained carrier Dec 13 09:10:45.886633 systemd-networkd[748]: Enumeration completed Dec 13 09:10:45.887274 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 09:10:45.887279 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 13 09:10:45.888546 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:10:45.888547 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:10:45.888552 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:10:45.889496 systemd[1]: Reached target network.target - Network. Dec 13 09:10:45.889521 systemd-networkd[748]: eth0: Link UP Dec 13 09:10:45.889526 systemd-networkd[748]: eth0: Gained carrier Dec 13 09:10:45.889539 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 09:10:45.895811 systemd-networkd[748]: eth1: Link UP Dec 13 09:10:45.895817 systemd-networkd[748]: eth1: Gained carrier Dec 13 09:10:45.895836 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 09:10:45.917436 systemd-networkd[748]: eth0: DHCPv4 address 146.190.157.113/20, gateway 146.190.144.1 acquired from 169.254.169.253 Dec 13 09:10:45.921430 ignition[649]: Ignition 2.19.0 Dec 13 09:10:45.921444 ignition[649]: Stage: fetch-offline Dec 13 09:10:45.921509 ignition[649]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:45.923785 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.11/20, gateway 10.124.0.1 acquired from 169.254.169.253 Dec 13 09:10:45.921521 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:45.924916 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:10:45.921646 ignition[649]: parsed url from cmdline: "" Dec 13 09:10:45.921652 ignition[649]: no config URL provided Dec 13 09:10:45.921660 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:10:45.921672 ignition[649]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:10:45.921679 ignition[649]: failed to fetch config: resource requires networking Dec 13 09:10:45.932725 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 09:10:45.921970 ignition[649]: Ignition finished successfully Dec 13 09:10:45.970846 ignition[757]: Ignition 2.19.0 Dec 13 09:10:45.970863 ignition[757]: Stage: fetch Dec 13 09:10:45.971181 ignition[757]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:45.971201 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:45.971405 ignition[757]: parsed url from cmdline: "" Dec 13 09:10:45.971412 ignition[757]: no config URL provided Dec 13 09:10:45.971421 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:10:45.971436 ignition[757]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:10:45.971464 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 13 09:10:46.009520 ignition[757]: GET result: OK Dec 13 09:10:46.009740 ignition[757]: parsing config with SHA512: 317338646aa7b6b60384984f902b84d7477f21dd3ca3bb30129da06e30a980ec3324bee3e926659d88d01f7fc375ccee4fba40c166a7b79bebd759c81a6aeb39 Dec 13 09:10:46.014777 unknown[757]: fetched base config from "system" Dec 13 09:10:46.014793 unknown[757]: fetched base config from "system" Dec 13 09:10:46.015260 ignition[757]: fetch: fetch complete Dec 13 09:10:46.014804 unknown[757]: fetched user config from "digitalocean" Dec 13 09:10:46.015270 ignition[757]: fetch: fetch passed Dec 13 09:10:46.017987 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 09:10:46.015700 ignition[757]: Ignition finished successfully Dec 13 09:10:46.025633 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 09:10:46.048465 ignition[764]: Ignition 2.19.0 Dec 13 09:10:46.048484 ignition[764]: Stage: kargs Dec 13 09:10:46.048785 ignition[764]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.048802 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.050039 ignition[764]: kargs: kargs passed Dec 13 09:10:46.053004 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 09:10:46.050109 ignition[764]: Ignition finished successfully Dec 13 09:10:46.074679 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 09:10:46.090750 ignition[770]: Ignition 2.19.0 Dec 13 09:10:46.090766 ignition[770]: Stage: disks Dec 13 09:10:46.091010 ignition[770]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.091026 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.093860 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 09:10:46.092484 ignition[770]: disks: disks passed Dec 13 09:10:46.092574 ignition[770]: Ignition finished successfully Dec 13 09:10:46.100053 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 09:10:46.101424 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 09:10:46.102521 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:10:46.103701 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:10:46.104909 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:10:46.118719 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 09:10:46.134137 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 09:10:46.137044 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 09:10:46.145497 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 09:10:46.263367 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 09:10:46.264559 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 09:10:46.265813 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 09:10:46.275550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:10:46.278962 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 09:10:46.281694 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Dec 13 09:10:46.291408 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787) Dec 13 09:10:46.291706 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 09:10:46.302569 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:46.302609 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:10:46.302628 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:10:46.292603 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 09:10:46.292660 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:10:46.308188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 09:10:46.315368 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:10:46.314696 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 09:10:46.325416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 09:10:46.399549 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 09:10:46.413385 coreos-metadata[789]: Dec 13 09:10:46.412 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:46.416993 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Dec 13 09:10:46.418786 coreos-metadata[790]: Dec 13 09:10:46.417 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:46.424073 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 09:10:46.427988 coreos-metadata[789]: Dec 13 09:10:46.426 INFO Fetch successful Dec 13 09:10:46.435064 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 09:10:46.436225 coreos-metadata[790]: Dec 13 09:10:46.435 INFO Fetch successful Dec 13 09:10:46.442047 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Dec 13 09:10:46.442195 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Dec 13 09:10:46.445482 coreos-metadata[790]: Dec 13 09:10:46.443 INFO wrote hostname ci-4081.2.1-e-b721934136 to /sysroot/etc/hostname Dec 13 09:10:46.446617 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 09:10:46.564074 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 09:10:46.576661 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 09:10:46.581541 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 09:10:46.589405 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:46.617359 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 09:10:46.626559 ignition[908]: INFO : Ignition 2.19.0 Dec 13 09:10:46.627478 ignition[908]: INFO : Stage: mount Dec 13 09:10:46.627950 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.627950 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.629359 ignition[908]: INFO : mount: mount passed Dec 13 09:10:46.629359 ignition[908]: INFO : Ignition finished successfully Dec 13 09:10:46.630209 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 09:10:46.637579 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 09:10:46.650270 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 09:10:46.655624 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:10:46.675602 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919) Dec 13 09:10:46.678604 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:46.678670 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:10:46.680569 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:10:46.685404 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:10:46.687176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 09:10:46.726287 ignition[935]: INFO : Ignition 2.19.0 Dec 13 09:10:46.727530 ignition[935]: INFO : Stage: files Dec 13 09:10:46.728192 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.728192 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.730113 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Dec 13 09:10:46.731002 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 09:10:46.731002 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 09:10:46.734533 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 09:10:46.735726 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 09:10:46.735726 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 09:10:46.735122 unknown[935]: wrote ssh authorized keys file for user: core Dec 13 09:10:46.738283 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 09:10:46.738283 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 09:10:46.773914 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 09:10:46.835505 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 09:10:46.835505 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:10:46.838153 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 09:10:46.893628 systemd-networkd[748]: eth0: Gained IPv6LL Dec 13 09:10:47.376997 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 09:10:47.405860 systemd-networkd[748]: eth1: Gained IPv6LL Dec 13 09:10:47.700142 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:10:47.700142 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 09:10:47.703157 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:10:47.703157 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:10:47.703157 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 09:10:47.703157 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 09:10:47.703157 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 09:10:47.703157 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:10:47.703157 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:10:47.703157 ignition[935]: INFO : files: files passed Dec 13 09:10:47.703157 ignition[935]: INFO : Ignition finished successfully Dec 13 09:10:47.704494 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 09:10:47.712748 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 09:10:47.722678 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 09:10:47.730003 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 09:10:47.730405 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 09:10:47.743882 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:47.743882 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:47.746786 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:47.746471 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:10:47.748309 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 09:10:47.757654 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 09:10:47.800200 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Dec 13 09:10:47.800484 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 09:10:47.802812 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 09:10:47.803744 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 09:10:47.805533 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 09:10:47.811637 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 09:10:47.835935 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:10:47.843710 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 09:10:47.865532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:10:47.866747 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:10:47.869022 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 09:10:47.871117 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 09:10:47.871516 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:10:47.873901 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 09:10:47.874670 systemd[1]: Stopped target basic.target - Basic System. Dec 13 09:10:47.875166 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 09:10:47.876812 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:10:47.878261 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 09:10:47.881059 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 09:10:47.883601 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:10:47.894071 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 09:10:47.895675 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 09:10:47.896835 systemd[1]: Stopped target swap.target - Swaps. Dec 13 09:10:47.897760 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 09:10:47.898426 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:10:47.902816 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:10:47.906645 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:10:47.907829 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 09:10:47.908181 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:10:47.910361 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 09:10:47.910634 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 09:10:47.913510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 09:10:47.914065 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:10:47.915884 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 09:10:47.916093 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 09:10:47.920534 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 09:10:47.920803 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 13 09:10:47.933890 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 09:10:47.936312 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 09:10:47.936648 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:10:47.941726 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 09:10:47.944534 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 09:10:47.944833 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:10:47.951433 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 09:10:47.951660 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:10:47.969162 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 09:10:47.970193 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 09:10:47.979443 ignition[988]: INFO : Ignition 2.19.0 Dec 13 09:10:47.979443 ignition[988]: INFO : Stage: umount Dec 13 09:10:47.979443 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:47.979443 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:47.988078 ignition[988]: INFO : umount: umount passed Dec 13 09:10:47.988078 ignition[988]: INFO : Ignition finished successfully Dec 13 09:10:47.981856 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 09:10:47.982020 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 09:10:47.983856 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 09:10:47.984029 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 09:10:47.990011 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 09:10:47.990139 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 09:10:47.990836 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 09:10:47.990896 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 09:10:47.997373 systemd[1]: Stopped target network.target - Network. Dec 13 09:10:48.001939 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 09:10:48.002097 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:10:48.005749 systemd[1]: Stopped target paths.target - Path Units. Dec 13 09:10:48.006469 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 09:10:48.011733 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:10:48.012871 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 09:10:48.013600 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 09:10:48.014204 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 09:10:48.014292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:10:48.014994 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 09:10:48.015052 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 09:10:48.017084 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 09:10:48.017185 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 09:10:48.019776 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 09:10:48.019861 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Dec 13 09:10:48.023833 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 09:10:48.026711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 09:10:48.029533 systemd-networkd[748]: eth0: DHCPv6 lease lost Dec 13 09:10:48.033702 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 09:10:48.034653 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 09:10:48.034843 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 09:10:48.037725 systemd-networkd[748]: eth1: DHCPv6 lease lost Dec 13 09:10:48.042303 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 09:10:48.042563 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 09:10:48.044440 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 09:10:48.044644 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 09:10:48.057111 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 09:10:48.057195 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:10:48.058489 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 09:10:48.058596 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 09:10:48.068722 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 09:10:48.069906 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 09:10:48.070051 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:10:48.074913 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 09:10:48.075189 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:10:48.077375 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 09:10:48.077481 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 09:10:48.078700 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 09:10:48.078781 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:10:48.081622 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:10:48.116584 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 09:10:48.117051 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:10:48.132596 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 09:10:48.132802 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 09:10:48.135155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 09:10:48.135299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 09:10:48.136664 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 09:10:48.136730 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:10:48.139049 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 09:10:48.139158 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:10:48.141219 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 09:10:48.141356 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 09:10:48.142839 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 09:10:48.142932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:10:48.153681 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 09:10:48.154582 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 09:10:48.154695 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:10:48.159409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:48.159541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:48.178391 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 09:10:48.178612 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 09:10:48.181038 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 09:10:48.211099 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 09:10:48.231079 systemd[1]: Switching root. Dec 13 09:10:48.279437 systemd-journald[182]: Journal stopped Dec 13 09:10:49.790585 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Dec 13 09:10:49.790665 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 09:10:49.790683 kernel: SELinux: policy capability open_perms=1 Dec 13 09:10:49.790694 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 09:10:49.790706 kernel: SELinux: policy capability always_check_network=0 Dec 13 09:10:49.790717 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 09:10:49.790728 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 09:10:49.790742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 09:10:49.790757 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 09:10:49.790770 kernel: audit: type=1403 audit(1734081048.621:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 09:10:49.790784 systemd[1]: Successfully loaded SELinux policy in 48.906ms. Dec 13 09:10:49.790806 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.481ms. Dec 13 09:10:49.790819 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:10:49.790831 systemd[1]: Detected virtualization kvm. Dec 13 09:10:49.790842 systemd[1]: Detected architecture x86-64. Dec 13 09:10:49.790854 systemd[1]: Detected first boot. Dec 13 09:10:49.790876 systemd[1]: Hostname set to <ci-4081.2.1-e-b721934136>. Dec 13 09:10:49.790887 systemd[1]: Initializing machine ID from VM UUID. Dec 13 09:10:49.790898 zram_generator::config[1031]: No configuration found. Dec 13 09:10:49.790913 systemd[1]: Populated /etc with preset unit settings. Dec 13 09:10:49.790925 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 09:10:49.790936 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 09:10:49.790948 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 09:10:49.790960 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 09:10:49.790974 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 09:10:49.790985 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 09:10:49.790997 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 09:10:49.791009 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 09:10:49.791021 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 09:10:49.791033 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 09:10:49.791044 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 09:10:49.791057 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:10:49.791068 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:10:49.791081 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 09:10:49.791093 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 09:10:49.791104 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 09:10:49.791117 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:10:49.791135 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 09:10:49.791150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:10:49.791168 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 09:10:49.791187 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 09:10:49.791205 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 09:10:49.791225 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 09:10:49.791242 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:10:49.791257 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:10:49.791274 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:10:49.791295 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:10:49.791314 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 09:10:49.793365 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 09:10:49.793403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:10:49.793418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:10:49.793431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:10:49.793443 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 09:10:49.793455 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 09:10:49.793466 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 09:10:49.793477 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 09:10:49.793489 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:49.793508 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 09:10:49.793520 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Dec 13 09:10:49.793531 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 09:10:49.793543 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 09:10:49.793554 systemd[1]: Reached target machines.target - Containers. Dec 13 09:10:49.793565 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 09:10:49.793577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:49.793588 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:10:49.793599 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 09:10:49.793614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:49.793625 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:10:49.793637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:49.793648 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 09:10:49.793659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:49.793671 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 09:10:49.793683 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 09:10:49.793694 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 09:10:49.793708 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 09:10:49.793721 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 09:10:49.793732 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:10:49.793743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:10:49.793754 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 09:10:49.793766 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 09:10:49.793782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:10:49.793794 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 09:10:49.793805 systemd[1]: Stopped verity-setup.service. Dec 13 09:10:49.793820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:49.793833 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 09:10:49.793846 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 09:10:49.793857 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 09:10:49.793868 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 09:10:49.793883 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 09:10:49.793894 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 09:10:49.793905 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 09:10:49.793916 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 13 09:10:49.793928 kernel: loop: module loaded Dec 13 09:10:49.793940 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 09:10:49.793953 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 09:10:49.793968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:10:49.793980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:49.793992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:10:49.794003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:49.794014 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:49.794025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:10:49.794037 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:10:49.794051 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 09:10:49.794063 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 09:10:49.794128 systemd-journald[1111]: Collecting audit messages is disabled. Dec 13 09:10:49.794161 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 09:10:49.794178 systemd-journald[1111]: Journal started Dec 13 09:10:49.794213 systemd-journald[1111]: Runtime Journal (/run/log/journal/ee7ab1b757ba4aaabaf62da34301c6fa) is 4.9M, max 39.3M, 34.4M free. Dec 13 09:10:49.370616 systemd[1]: Queued start job for default target multi-user.target. Dec 13 09:10:49.392899 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 09:10:49.393471 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 09:10:49.797446 kernel: ACPI: bus type drm_connector registered Dec 13 09:10:49.808367 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 09:10:49.817543 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 09:10:49.817627 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:10:49.826369 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 09:10:49.831361 kernel: fuse: init (API version 7.39) Dec 13 09:10:49.835442 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 09:10:49.845462 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 09:10:49.850462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:49.862392 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 09:10:49.862492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:49.873362 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 09:10:49.876515 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:49.893413 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:10:49.900410 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Dec 13 09:10:49.906367 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 09:10:49.912418 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:10:49.916744 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:10:49.916999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:10:49.918236 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 09:10:49.918482 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 09:10:49.920990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:10:49.922056 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 09:10:49.930962 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 09:10:49.960095 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 09:10:49.976289 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 09:10:49.987538 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 09:10:49.988450 kernel: loop0: detected capacity change from 0 to 8 Dec 13 09:10:50.020468 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 09:10:50.015680 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 09:10:50.029622 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 09:10:50.033834 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 09:10:50.039767 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:10:50.041203 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 09:10:50.075698 kernel: loop1: detected capacity change from 0 to 142488 Dec 13 09:10:50.114584 systemd-journald[1111]: Time spent on flushing to /var/log/journal/ee7ab1b757ba4aaabaf62da34301c6fa is 145.707ms for 995 entries. Dec 13 09:10:50.114584 systemd-journald[1111]: System Journal (/var/log/journal/ee7ab1b757ba4aaabaf62da34301c6fa) is 8.0M, max 195.6M, 187.6M free. Dec 13 09:10:50.289746 systemd-journald[1111]: Received client request to flush runtime journal. Dec 13 09:10:50.289793 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 09:10:50.289809 kernel: loop3: detected capacity change from 0 to 210664 Dec 13 09:10:50.135256 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 09:10:50.139425 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 09:10:50.155073 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 09:10:50.159459 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 09:10:50.177554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:10:50.279577 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Dec 13 09:10:50.279592 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Dec 13 09:10:50.286552 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:10:50.294004 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Dec 13 09:10:50.318370 kernel: loop4: detected capacity change from 0 to 8 Dec 13 09:10:50.326531 kernel: loop5: detected capacity change from 0 to 142488 Dec 13 09:10:50.358464 kernel: loop6: detected capacity change from 0 to 140768 Dec 13 09:10:50.375380 kernel: loop7: detected capacity change from 0 to 210664 Dec 13 09:10:50.395363 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Dec 13 09:10:50.396044 (sd-merge)[1177]: Merged extensions into '/usr'. Dec 13 09:10:50.419872 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 09:10:50.419891 systemd[1]: Reloading... Dec 13 09:10:50.685078 zram_generator::config[1203]: No configuration found. Dec 13 09:10:50.774503 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 09:10:50.922745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:10:50.984059 systemd[1]: Reloading finished in 563 ms. Dec 13 09:10:51.005908 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 09:10:51.009564 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 09:10:51.022651 systemd[1]: Starting ensure-sysext.service... Dec 13 09:10:51.026415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:10:51.044797 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Dec 13 09:10:51.044809 systemd[1]: Reloading... Dec 13 09:10:51.093509 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 09:10:51.095807 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 09:10:51.097063 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 09:10:51.099777 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Dec 13 09:10:51.100039 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Dec 13 09:10:51.105973 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 09:10:51.106134 systemd-tmpfiles[1247]: Skipping /boot Dec 13 09:10:51.141866 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 09:10:51.143548 systemd-tmpfiles[1247]: Skipping /boot Dec 13 09:10:51.215392 zram_generator::config[1276]: No configuration found. Dec 13 09:10:51.394133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:10:51.460372 systemd[1]: Reloading finished in 415 ms. Dec 13 09:10:51.484701 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 09:10:51.492219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:10:51.509301 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:10:51.512656 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Dec 13 09:10:51.516993 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 09:10:51.529713 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 09:10:51.532566 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:10:51.539616 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 09:10:51.549776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:51.550049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:51.557797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:51.567753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:51.571759 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:51.573703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:51.573893 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:51.592776 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 09:10:51.599426 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 09:10:51.603075 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:51.604908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:51.605216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:51.612722 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 09:10:51.614528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:51.621548 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:51.621862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:51.634501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:10:51.635416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:51.635636 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:51.642019 systemd[1]: Finished ensure-sysext.service. Dec 13 09:10:51.655018 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 09:10:51.656931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:10:51.657161 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:51.658808 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:51.659217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 09:10:51.666147 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Dec 13 09:10:51.669390 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:51.671548 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 09:10:51.680578 augenrules[1350]: No rules Dec 13 09:10:51.678436 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:10:51.678673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:10:51.682229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:10:51.682679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:51.685102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:51.692008 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:10:51.705420 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 09:10:51.736153 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 09:10:51.747149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:10:51.758622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:10:51.763439 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 09:10:51.764323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:10:51.875542 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 09:10:51.876512 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 09:10:51.949472 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1374) Dec 13 09:10:51.948413 systemd-networkd[1367]: lo: Link UP Dec 13 09:10:51.948419 systemd-networkd[1367]: lo: Gained carrier Dec 13 09:10:51.950189 systemd-networkd[1367]: Enumeration completed Dec 13 09:10:51.950439 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:10:51.957377 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1374) Dec 13 09:10:51.959632 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 09:10:51.966553 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 09:10:51.976527 systemd-resolved[1325]: Positive Trust Anchors: Dec 13 09:10:51.976555 systemd-resolved[1325]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:10:51.976605 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:10:51.995064 systemd-resolved[1325]: Using system hostname 'ci-4081.2.1-e-b721934136'. Dec 13 09:10:52.003742 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:10:52.005734 systemd[1]: Reached target network.target - Network. Dec 13 09:10:52.007610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:10:52.014608 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-56:0a:0c:3d:9c:d9.network. Dec 13 09:10:52.016039 systemd-networkd[1367]: eth0: Link UP Dec 13 09:10:52.016165 systemd-networkd[1367]: eth0: Gained carrier Dec 13 09:10:52.019371 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:10:52.043586 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 13 09:10:52.045435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:52.045676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:52.050613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:52.057727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:52.066373 kernel: ISO 9660 Extensions: RRIP_1991A Dec 13 09:10:52.071630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:52.073536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:52.073610 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:10:52.073633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:52.079277 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 13 09:10:52.093371 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 09:10:52.102442 kernel: ACPI: button: Power Button [PWRF] Dec 13 09:10:52.131512 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 09:10:52.160889 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1379) Dec 13 09:10:52.132779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:10:52.133068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:52.134996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 09:10:52.135732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:52.139123 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:52.148680 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:52.149481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:10:52.151204 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:52.254377 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 09:10:52.316701 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-0a:39:bd:18:81:f8.network. Dec 13 09:10:52.319920 systemd-networkd[1367]: eth1: Link UP Dec 13 09:10:52.320062 systemd-networkd[1367]: eth1: Gained carrier Dec 13 09:10:52.321319 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:10:52.326528 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:10:52.337911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:52.370061 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 09:10:52.374568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 09:10:52.383796 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 09:10:52.479528 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 09:10:52.510379 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 09:10:52.517370 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 09:10:52.589781 kernel: EDAC MC: Ver: 3.0.0 Dec 13 09:10:52.589902 kernel: Console: switching to colour dummy device 80x25 Dec 13 09:10:52.591363 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 09:10:52.591496 kernel: [drm] features: -context_init Dec 13 09:10:52.595996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:52.599303 kernel: [drm] number of scanouts: 1 Dec 13 09:10:52.599513 kernel: [drm] number of cap sets: 0 Dec 13 09:10:52.602433 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 09:10:52.607028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:52.607254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:52.607511 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:52.612188 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 09:10:52.616949 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 09:10:52.617189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:52.626579 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 09:10:52.634096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:52.634592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:52.652895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 09:10:52.654103 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 09:10:52.661817 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 09:10:52.695546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:52.698469 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:10:52.738987 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 09:10:52.741145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:10:52.741839 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:10:52.742071 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 09:10:52.742206 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 09:10:52.742541 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 09:10:52.742710 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 09:10:52.742788 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 09:10:52.742847 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 09:10:52.742871 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:10:52.742926 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:10:52.744897 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 09:10:52.746992 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 09:10:52.754527 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 09:10:52.757721 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 09:10:52.758908 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 09:10:52.762109 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:10:52.762742 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:10:52.763409 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:10:52.763441 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:10:52.767564 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 09:10:52.771538 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:10:52.781773 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 09:10:52.798730 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 09:10:52.806534 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 09:10:52.812413 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 09:10:52.813105 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 09:10:52.822624 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 09:10:52.828540 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Dec 13 09:10:52.839675 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 09:10:52.849649 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 09:10:52.864626 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 09:10:52.866750 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 09:10:52.867476 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 09:10:52.871645 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 09:10:52.880665 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 09:10:52.883468 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 09:10:52.883863 dbus-daemon[1437]: [system] SELinux support is enabled Dec 13 09:10:52.886985 extend-filesystems[1441]: Found loop4 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found loop5 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found loop6 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found loop7 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found vda Dec 13 09:10:52.904222 extend-filesystems[1441]: Found vda1 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found vda2 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found vda3 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found usr Dec 13 09:10:52.904222 extend-filesystems[1441]: Found vda4 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found vda6 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found vda7 Dec 13 09:10:52.904222 extend-filesystems[1441]: Found vda9 Dec 13 09:10:52.904222 extend-filesystems[1441]: Checking size of /dev/vda9 Dec 13 09:10:52.895713 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 09:10:52.964487 jq[1438]: false Dec 13 09:10:52.964628 coreos-metadata[1436]: Dec 13 09:10:52.908 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:52.964628 coreos-metadata[1436]: Dec 13 09:10:52.955 INFO Fetch successful Dec 13 09:10:52.910653 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 09:10:52.912464 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 09:10:52.913080 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 09:10:52.913421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 09:10:52.942775 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 09:10:52.942861 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 09:10:52.957213 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 09:10:52.959426 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). 
Dec 13 09:10:52.959486 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 09:10:52.978382 extend-filesystems[1441]: Resized partition /dev/vda9 Dec 13 09:10:52.989501 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024) Dec 13 09:10:52.995147 update_engine[1448]: I20241213 09:10:52.987082 1448 main.cc:92] Flatcar Update Engine starting Dec 13 09:10:53.004138 update_engine[1448]: I20241213 09:10:53.002733 1448 update_check_scheduler.cc:74] Next update check in 3m4s Dec 13 09:10:53.004371 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 13 09:10:53.011316 systemd[1]: Started update-engine.service - Update Engine. Dec 13 09:10:53.015218 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 09:10:53.015521 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 09:10:53.020396 jq[1449]: true Dec 13 09:10:53.028813 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 09:10:53.057547 tar[1457]: linux-amd64/helm Dec 13 09:10:53.056174 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 09:10:53.110621 systemd-networkd[1367]: eth0: Gained IPv6LL Dec 13 09:10:53.137227 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1374) Dec 13 09:10:53.113609 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:10:53.121846 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 09:10:53.137700 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 09:10:53.152012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:10:53.166224 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 09:10:53.168127 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 09:10:53.179894 jq[1476]: true Dec 13 09:10:53.185662 systemd-logind[1447]: New seat seat0. Dec 13 09:10:53.193469 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 09:10:53.194431 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 09:10:53.195650 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 09:10:53.210637 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 09:10:53.267305 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 09:10:53.322765 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 09:10:53.322765 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 09:10:53.322765 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 09:10:53.365630 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:10:53.362583 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 09:10:53.365918 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Dec 13 09:10:53.365918 extend-filesystems[1441]: Found vdb Dec 13 09:10:53.366496 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 09:10:53.367466 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 09:10:53.386733 systemd[1]: Starting sshkeys.service... 
Dec 13 09:10:53.429978 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 09:10:53.438050 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 09:10:53.469943 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 09:10:53.489666 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 09:10:53.601831 coreos-metadata[1514]: Dec 13 09:10:53.597 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:53.618276 coreos-metadata[1514]: Dec 13 09:10:53.616 INFO Fetch successful Dec 13 09:10:53.651432 unknown[1514]: wrote ssh authorized keys file for user: core Dec 13 09:10:53.665314 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 09:10:53.695068 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:10:53.700012 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 09:10:53.707871 systemd[1]: Finished sshkeys.service. Dec 13 09:10:53.784432 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 09:10:53.798603 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 09:10:53.820671 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 09:10:53.820968 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 09:10:53.834182 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 09:10:53.893885 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 09:10:53.901172 containerd[1470]: time="2024-12-13T09:10:53.901051219Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 09:10:53.904049 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 09:10:53.913939 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 09:10:53.916835 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 09:10:53.969373 containerd[1470]: time="2024-12-13T09:10:53.969109913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972096 containerd[1470]: time="2024-12-13T09:10:53.971549929Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972096 containerd[1470]: time="2024-12-13T09:10:53.971599673Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 09:10:53.972096 containerd[1470]: time="2024-12-13T09:10:53.971618472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 09:10:53.972096 containerd[1470]: time="2024-12-13T09:10:53.971783578Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 09:10:53.972096 containerd[1470]: time="2024-12-13T09:10:53.971803110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972096 containerd[1470]: time="2024-12-13T09:10:53.971856388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972096 containerd[1470]: time="2024-12-13T09:10:53.971870446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972410 containerd[1470]: time="2024-12-13T09:10:53.972113566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972410 containerd[1470]: time="2024-12-13T09:10:53.972140405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972410 containerd[1470]: time="2024-12-13T09:10:53.972165757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972410 containerd[1470]: time="2024-12-13T09:10:53.972181311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972410 containerd[1470]: time="2024-12-13T09:10:53.972317562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:53.972735 containerd[1470]: time="2024-12-13T09:10:53.972639611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:53.973450 containerd[1470]: time="2024-12-13T09:10:53.972816573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:53.973450 containerd[1470]: time="2024-12-13T09:10:53.972839879Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 09:10:53.973450 containerd[1470]: time="2024-12-13T09:10:53.972941376Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 09:10:53.973450 containerd[1470]: time="2024-12-13T09:10:53.973140842Z" level=info msg="metadata content store policy set" policy=shared Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.993164992Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.993239546Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.993259184Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.993274578Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.993289430Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.993493804Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.993844101Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.994026582Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.994068282Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.994086046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.994110839Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.994133012Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.994151748Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 09:10:53.996233 containerd[1470]: time="2024-12-13T09:10:53.994170858Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994189173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994209882Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994229271Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994248536Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994274427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994318584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994374402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994396557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994414915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994433684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994451716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994483993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994504464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.996770 containerd[1470]: time="2024-12-13T09:10:53.994526010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994547475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994566906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994586566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994626979Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994663428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994681825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994699673Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994787486Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994817062Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994834586Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994852964Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994867578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994885854Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 09:10:53.997084 containerd[1470]: time="2024-12-13T09:10:53.994907082Z" level=info msg="NRI interface is disabled by configuration." Dec 13 09:10:53.997472 containerd[1470]: time="2024-12-13T09:10:53.994922798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 09:10:53.997512 containerd[1470]: time="2024-12-13T09:10:53.996292666Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 09:10:53.997512 containerd[1470]: time="2024-12-13T09:10:53.996402727Z" level=info msg="Connect containerd service" Dec 13 09:10:53.997512 containerd[1470]: time="2024-12-13T09:10:53.996454038Z" level=info msg="using legacy CRI server" Dec 13 09:10:53.997512 containerd[1470]: time="2024-12-13T09:10:53.996466427Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 09:10:53.997512 containerd[1470]: time="2024-12-13T09:10:53.996638415Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 09:10:53.997849 containerd[1470]: time="2024-12-13T09:10:53.997758566Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:10:54.000981 
containerd[1470]: time="2024-12-13T09:10:53.998558281Z" level=info msg="Start subscribing containerd event" Dec 13 09:10:54.000981 containerd[1470]: time="2024-12-13T09:10:53.998624015Z" level=info msg="Start recovering state" Dec 13 09:10:54.000981 containerd[1470]: time="2024-12-13T09:10:53.998716258Z" level=info msg="Start event monitor" Dec 13 09:10:54.000981 containerd[1470]: time="2024-12-13T09:10:53.998728376Z" level=info msg="Start snapshots syncer" Dec 13 09:10:54.000981 containerd[1470]: time="2024-12-13T09:10:53.998739375Z" level=info msg="Start cni network conf syncer for default" Dec 13 09:10:54.000981 containerd[1470]: time="2024-12-13T09:10:53.998752112Z" level=info msg="Start streaming server" Dec 13 09:10:54.000981 containerd[1470]: time="2024-12-13T09:10:53.998990629Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 09:10:54.000981 containerd[1470]: time="2024-12-13T09:10:53.999064216Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 09:10:53.999618 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 09:10:54.004430 containerd[1470]: time="2024-12-13T09:10:54.004365827Z" level=info msg="containerd successfully booted in 0.123285s" Dec 13 09:10:54.126273 systemd-networkd[1367]: eth1: Gained IPv6LL Dec 13 09:10:54.128438 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:10:54.386313 tar[1457]: linux-amd64/LICENSE Dec 13 09:10:54.386313 tar[1457]: linux-amd64/README.md Dec 13 09:10:54.401859 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 09:10:54.869957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:10:54.874832 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 09:10:54.878699 systemd[1]: Startup finished in 1.399s (kernel) + 5.852s (initrd) + 6.303s (userspace) = 13.556s. Dec 13 09:10:54.883658 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:10:55.632211 kubelet[1560]: E1213 09:10:55.632157 1560 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:10:55.635291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:10:55.635991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:10:55.636665 systemd[1]: kubelet.service: Consumed 1.506s CPU time. Dec 13 09:10:56.987728 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 09:10:56.989404 systemd[1]: Started sshd@0-146.190.157.113:22-147.75.109.163:58968.service - OpenSSH per-connection server daemon (147.75.109.163:58968). Dec 13 09:10:57.079137 sshd[1573]: Accepted publickey for core from 147.75.109.163 port 58968 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:57.081262 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:57.092317 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 09:10:57.100798 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 09:10:57.105518 systemd-logind[1447]: New session 1 of user core. 
Dec 13 09:10:57.124582 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 09:10:57.131980 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 09:10:57.147438 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 09:10:57.280109 systemd[1577]: Queued start job for default target default.target. Dec 13 09:10:57.289643 systemd[1577]: Created slice app.slice - User Application Slice. Dec 13 09:10:57.289937 systemd[1577]: Reached target paths.target - Paths. Dec 13 09:10:57.290079 systemd[1577]: Reached target timers.target - Timers. Dec 13 09:10:57.291906 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 09:10:57.314768 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 09:10:57.314910 systemd[1577]: Reached target sockets.target - Sockets. Dec 13 09:10:57.314929 systemd[1577]: Reached target basic.target - Basic System. Dec 13 09:10:57.315001 systemd[1577]: Reached target default.target - Main User Target. Dec 13 09:10:57.315045 systemd[1577]: Startup finished in 156ms. Dec 13 09:10:57.315378 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 09:10:57.327689 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 09:10:57.396853 systemd[1]: Started sshd@1-146.190.157.113:22-147.75.109.163:58982.service - OpenSSH per-connection server daemon (147.75.109.163:58982). Dec 13 09:10:57.446112 sshd[1588]: Accepted publickey for core from 147.75.109.163 port 58982 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:57.448154 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:57.454915 systemd-logind[1447]: New session 2 of user core. Dec 13 09:10:57.459602 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 09:10:57.524841 sshd[1588]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:57.537571 systemd[1]: sshd@1-146.190.157.113:22-147.75.109.163:58982.service: Deactivated successfully. Dec 13 09:10:57.540755 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 09:10:57.543533 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Dec 13 09:10:57.549813 systemd[1]: Started sshd@2-146.190.157.113:22-147.75.109.163:58998.service - OpenSSH per-connection server daemon (147.75.109.163:58998). Dec 13 09:10:57.552480 systemd-logind[1447]: Removed session 2. Dec 13 09:10:57.597547 sshd[1595]: Accepted publickey for core from 147.75.109.163 port 58998 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:57.599174 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:57.604621 systemd-logind[1447]: New session 3 of user core. Dec 13 09:10:57.615686 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 09:10:57.674898 sshd[1595]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:57.687681 systemd[1]: sshd@2-146.190.157.113:22-147.75.109.163:58998.service: Deactivated successfully. Dec 13 09:10:57.690293 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 09:10:57.692693 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Dec 13 09:10:57.698994 systemd[1]: Started sshd@3-146.190.157.113:22-147.75.109.163:59012.service - OpenSSH per-connection server daemon (147.75.109.163:59012). Dec 13 09:10:57.701065 systemd-logind[1447]: Removed session 3. 
Dec 13 09:10:57.754726 sshd[1602]: Accepted publickey for core from 147.75.109.163 port 59012 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:57.756892 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:57.763623 systemd-logind[1447]: New session 4 of user core. Dec 13 09:10:57.772689 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 09:10:57.837240 sshd[1602]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:57.851604 systemd[1]: sshd@3-146.190.157.113:22-147.75.109.163:59012.service: Deactivated successfully. Dec 13 09:10:57.854488 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 09:10:57.856797 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Dec 13 09:10:57.866894 systemd[1]: Started sshd@4-146.190.157.113:22-147.75.109.163:59020.service - OpenSSH per-connection server daemon (147.75.109.163:59020). Dec 13 09:10:57.869175 systemd-logind[1447]: Removed session 4. Dec 13 09:10:57.912964 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 59020 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:57.914923 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:57.920831 systemd-logind[1447]: New session 5 of user core. Dec 13 09:10:57.928656 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 09:10:58.001380 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 09:10:58.001761 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:10:58.018890 sudo[1612]: pam_unix(sudo:session): session closed for user root Dec 13 09:10:58.022959 sshd[1609]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:58.036570 systemd[1]: sshd@4-146.190.157.113:22-147.75.109.163:59020.service: Deactivated successfully. Dec 13 09:10:58.038736 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 09:10:58.040787 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Dec 13 09:10:58.047095 systemd[1]: Started sshd@5-146.190.157.113:22-147.75.109.163:59026.service - OpenSSH per-connection server daemon (147.75.109.163:59026). Dec 13 09:10:58.048959 systemd-logind[1447]: Removed session 5. Dec 13 09:10:58.098577 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 59026 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:58.100934 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:58.108870 systemd-logind[1447]: New session 6 of user core. Dec 13 09:10:58.116734 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 09:10:58.181034 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 09:10:58.181925 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:10:58.187031 sudo[1621]: pam_unix(sudo:session): session closed for user root Dec 13 09:10:58.194658 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 09:10:58.194974 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:10:58.218858 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Dec 13 09:10:58.236047 auditctl[1624]: No rules Dec 13 09:10:58.295808 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 09:10:58.296105 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 09:10:58.300703 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:10:58.408002 augenrules[1642]: No rules Dec 13 09:10:58.409208 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:10:58.410792 sudo[1620]: pam_unix(sudo:session): session closed for user root Dec 13 09:10:58.415564 sshd[1617]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:58.425550 systemd[1]: sshd@5-146.190.157.113:22-147.75.109.163:59026.service: Deactivated successfully. Dec 13 09:10:58.428357 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 09:10:58.431120 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Dec 13 09:10:58.441021 systemd[1]: Started sshd@6-146.190.157.113:22-147.75.109.163:59034.service - OpenSSH per-connection server daemon (147.75.109.163:59034). Dec 13 09:10:58.444130 systemd-logind[1447]: Removed session 6. Dec 13 09:10:58.487178 sshd[1650]: Accepted publickey for core from 147.75.109.163 port 59034 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:58.489049 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:58.494684 systemd-logind[1447]: New session 7 of user core. Dec 13 09:10:58.506679 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 09:10:58.567435 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 09:10:58.568369 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:10:59.098180 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 09:10:59.106951 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 09:10:59.613482 dockerd[1669]: time="2024-12-13T09:10:59.612970301Z" level=info msg="Starting up" Dec 13 09:10:59.768253 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport707963310-merged.mount: Deactivated successfully. Dec 13 09:10:59.809133 dockerd[1669]: time="2024-12-13T09:10:59.809040720Z" level=info msg="Loading containers: start." Dec 13 09:11:00.004445 kernel: Initializing XFRM netlink socket Dec 13 09:11:00.045850 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:11:00.047010 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:11:00.058464 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:11:00.126932 systemd-networkd[1367]: docker0: Link UP Dec 13 09:11:00.127657 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:11:00.157149 dockerd[1669]: time="2024-12-13T09:11:00.157091094Z" level=info msg="Loading containers: done." 
Dec 13 09:11:00.181922 dockerd[1669]: time="2024-12-13T09:11:00.181852318Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 09:11:00.182273 dockerd[1669]: time="2024-12-13T09:11:00.182009960Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 09:11:00.182273 dockerd[1669]: time="2024-12-13T09:11:00.182168595Z" level=info msg="Daemon has completed initialization" Dec 13 09:11:00.241625 dockerd[1669]: time="2024-12-13T09:11:00.241552486Z" level=info msg="API listen on /run/docker.sock" Dec 13 09:11:00.242482 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 09:11:00.762105 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4247272899-merged.mount: Deactivated successfully. Dec 13 09:11:01.471616 containerd[1470]: time="2024-12-13T09:11:01.471483828Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 09:11:02.163175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237603173.mount: Deactivated successfully. Dec 13 09:11:03.868557 containerd[1470]: time="2024-12-13T09:11:03.868490989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:03.869913 containerd[1470]: time="2024-12-13T09:11:03.869853702Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 09:11:03.872861 containerd[1470]: time="2024-12-13T09:11:03.872740354Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:03.879664 containerd[1470]: time="2024-12-13T09:11:03.879502081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:03.882135 containerd[1470]: time="2024-12-13T09:11:03.882059186Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.410509359s" Dec 13 09:11:03.882135 containerd[1470]: time="2024-12-13T09:11:03.882115262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 09:11:03.917561 containerd[1470]: time="2024-12-13T09:11:03.917288658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 09:11:05.885841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 09:11:05.895740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 09:11:05.905906 containerd[1470]: time="2024-12-13T09:11:05.905846074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:05.909016 containerd[1470]: time="2024-12-13T09:11:05.908961422Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 09:11:05.909686 containerd[1470]: time="2024-12-13T09:11:05.909658002Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:05.914577 containerd[1470]: time="2024-12-13T09:11:05.914536763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:05.917367 containerd[1470]: time="2024-12-13T09:11:05.915724380Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.998390506s" Dec 13 09:11:05.917367 containerd[1470]: time="2024-12-13T09:11:05.915779248Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 09:11:05.959084 containerd[1470]: time="2024-12-13T09:11:05.959034822Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 09:11:06.043563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:06.049240 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:11:06.123741 kubelet[1896]: E1213 09:11:06.123673 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:11:06.128838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:11:06.128998 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 09:11:07.307859 containerd[1470]: time="2024-12-13T09:11:07.307786063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:07.309540 containerd[1470]: time="2024-12-13T09:11:07.309461763Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 09:11:07.310385 containerd[1470]: time="2024-12-13T09:11:07.310284944Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:07.318574 containerd[1470]: time="2024-12-13T09:11:07.318490686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:07.321371 containerd[1470]: time="2024-12-13T09:11:07.320692027Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.361601457s" Dec 13 09:11:07.321371 containerd[1470]: time="2024-12-13T09:11:07.320759933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 09:11:07.355126 containerd[1470]: time="2024-12-13T09:11:07.354963474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 09:11:07.761568 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 13 09:11:08.575173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707982456.mount: Deactivated successfully. 
Dec 13 09:11:09.346654 containerd[1470]: time="2024-12-13T09:11:09.346567945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:09.347867 containerd[1470]: time="2024-12-13T09:11:09.347772746Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 09:11:09.349138 containerd[1470]: time="2024-12-13T09:11:09.349055322Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:09.352178 containerd[1470]: time="2024-12-13T09:11:09.351711383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:09.352919 containerd[1470]: time="2024-12-13T09:11:09.352873004Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.997839415s" Dec 13 09:11:09.353003 containerd[1470]: time="2024-12-13T09:11:09.352926378Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 09:11:09.403289 containerd[1470]: time="2024-12-13T09:11:09.403243351Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 09:11:10.048935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55855997.mount: Deactivated successfully. Dec 13 09:11:10.831813 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Dec 13 09:11:11.368318 containerd[1470]: time="2024-12-13T09:11:11.367224116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:11.369632 containerd[1470]: time="2024-12-13T09:11:11.369553832Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 09:11:11.371188 containerd[1470]: time="2024-12-13T09:11:11.371141801Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:11.375943 containerd[1470]: time="2024-12-13T09:11:11.375896381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:11.378404 containerd[1470]: time="2024-12-13T09:11:11.378134679Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.974838635s" Dec 13 09:11:11.378404 containerd[1470]: time="2024-12-13T09:11:11.378197585Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 09:11:11.432369 containerd[1470]: time="2024-12-13T09:11:11.431686696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 09:11:12.004947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273421226.mount: Deactivated successfully. 
Dec 13 09:11:12.013312 containerd[1470]: time="2024-12-13T09:11:12.011241851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:12.015888 containerd[1470]: time="2024-12-13T09:11:12.015495483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 09:11:12.032812 containerd[1470]: time="2024-12-13T09:11:12.032717675Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:12.038814 containerd[1470]: time="2024-12-13T09:11:12.038733544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:12.040718 containerd[1470]: time="2024-12-13T09:11:12.040635000Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 608.874451ms" Dec 13 09:11:12.043070 containerd[1470]: time="2024-12-13T09:11:12.041046796Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 09:11:12.091545 containerd[1470]: time="2024-12-13T09:11:12.091246210Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 09:11:12.735096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4664206.mount: Deactivated successfully. Dec 13 09:11:15.067579 containerd[1470]: time="2024-12-13T09:11:15.067462590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:15.069823 containerd[1470]: time="2024-12-13T09:11:15.069699547Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 09:11:15.072375 containerd[1470]: time="2024-12-13T09:11:15.071010110Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:15.080376 containerd[1470]: time="2024-12-13T09:11:15.078606635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:15.083703 containerd[1470]: time="2024-12-13T09:11:15.083645319Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.99231336s" Dec 13 09:11:15.083703 containerd[1470]: time="2024-12-13T09:11:15.083702729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 09:11:16.380237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 13 09:11:16.391803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:16.592106 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:11:16.592559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:16.673797 kubelet[2093]: E1213 09:11:16.672732 2093 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:11:16.676816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:11:16.677621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:11:19.460035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:19.471815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:19.499715 systemd[1]: Reloading requested from client PID 2109 ('systemctl') (unit session-7.scope)... Dec 13 09:11:19.499740 systemd[1]: Reloading... Dec 13 09:11:19.626444 zram_generator::config[2148]: No configuration found. Dec 13 09:11:19.780778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:11:19.863465 systemd[1]: Reloading finished in 363 ms. Dec 13 09:11:19.916786 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 09:11:19.916908 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 09:11:19.917286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:19.925751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:20.082592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:20.085069 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:11:20.148696 kubelet[2203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:20.149514 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 09:11:20.149514 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 09:11:20.151368 kubelet[2203]: I1213 09:11:20.150843 2203 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:11:20.623981 kubelet[2203]: I1213 09:11:20.623803 2203 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 09:11:20.623981 kubelet[2203]: I1213 09:11:20.623840 2203 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:11:20.624275 kubelet[2203]: I1213 09:11:20.624243 2203 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 09:11:20.646422 kubelet[2203]: I1213 09:11:20.646371 2203 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:11:20.647411 kubelet[2203]: E1213 09:11:20.647222 2203 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.157.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.669449 kubelet[2203]: I1213 09:11:20.668609 2203 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 09:11:20.669449 kubelet[2203]: I1213 09:11:20.668963 2203 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:11:20.669449 kubelet[2203]: I1213 09:11:20.669016 2203 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-e-b721934136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 09:11:20.670137 kubelet[2203]: I1213 09:11:20.670111 2203 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:11:20.670222 kubelet[2203]: I1213 09:11:20.670214 2203 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 09:11:20.670417 kubelet[2203]: I1213 09:11:20.670406 2203 state_mem.go:36] "Initialized new 
in-memory state store" Dec 13 09:11:20.671443 kubelet[2203]: I1213 09:11:20.671420 2203 kubelet.go:400] "Attempting to sync node with API server" Dec 13 09:11:20.671570 kubelet[2203]: I1213 09:11:20.671557 2203 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 09:11:20.671660 kubelet[2203]: I1213 09:11:20.671651 2203 kubelet.go:312] "Adding apiserver pod source" Dec 13 09:11:20.671726 kubelet[2203]: I1213 09:11:20.671717 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:11:20.674882 kubelet[2203]: W1213 09:11:20.674825 2203 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.157.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-e-b721934136&limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.675044 kubelet[2203]: E1213 09:11:20.675033 2203 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.157.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-e-b721934136&limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.675219 kubelet[2203]: W1213 09:11:20.675179 2203 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.157.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.675396 kubelet[2203]: E1213 09:11:20.675376 2203 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.157.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.676022 kubelet[2203]: I1213 09:11:20.675993 2203 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:11:20.677847 kubelet[2203]: I1213 09:11:20.677806 2203 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:11:20.678002 kubelet[2203]: W1213 09:11:20.677989 2203 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 09:11:20.679268 kubelet[2203]: I1213 09:11:20.679245 2203 server.go:1264] "Started kubelet" Dec 13 09:11:20.683041 kubelet[2203]: I1213 09:11:20.682484 2203 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:11:20.690191 kubelet[2203]: I1213 09:11:20.684016 2203 server.go:455] "Adding debug handlers to kubelet server" Dec 13 09:11:20.691489 kubelet[2203]: I1213 09:11:20.691420 2203 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:11:20.691910 kubelet[2203]: I1213 09:11:20.691864 2203 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:11:20.692115 kubelet[2203]: I1213 09:11:20.692087 2203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:11:20.693784 kubelet[2203]: E1213 09:11:20.693526 2203 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.157.113:6443/api/v1/namespaces/default/events\": dial tcp 146.190.157.113:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-e-b721934136.1810b18e299c2cbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-e-b721934136,UID:ci-4081.2.1-e-b721934136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-e-b721934136,},FirstTimestamp:2024-12-13 09:11:20.67921427 +0000 UTC m=+0.589269694,LastTimestamp:2024-12-13 09:11:20.67921427 +0000 UTC m=+0.589269694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-e-b721934136,}" Dec 13 09:11:20.706726 kubelet[2203]: E1213 09:11:20.706685 2203 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-e-b721934136\" not found" Dec 13 09:11:20.707502 kubelet[2203]: I1213 09:11:20.707472 2203 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 09:11:20.707841 kubelet[2203]: I1213 09:11:20.707822 2203 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 09:11:20.708018 kubelet[2203]: I1213 09:11:20.708004 2203 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:11:20.710732 kubelet[2203]: E1213 09:11:20.710682 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.157.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-e-b721934136?timeout=10s\": dial tcp 146.190.157.113:6443: connect: connection refused" interval="200ms" Dec 13 09:11:20.711373 kubelet[2203]: E1213 09:11:20.711027 2203 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:11:20.711373 kubelet[2203]: I1213 09:11:20.711312 2203 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:11:20.711672 kubelet[2203]: I1213 09:11:20.711646 2203 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:11:20.712861 kubelet[2203]: W1213 09:11:20.712804 2203 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.157.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.713097 kubelet[2203]: E1213 09:11:20.713042 2203 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.157.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.713705 kubelet[2203]: I1213 09:11:20.713677 2203 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:11:20.730459 kubelet[2203]: I1213 09:11:20.729492 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:11:20.731504 kubelet[2203]: I1213 09:11:20.731463 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 09:11:20.731621 kubelet[2203]: I1213 09:11:20.731516 2203 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:11:20.731621 kubelet[2203]: I1213 09:11:20.731553 2203 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 09:11:20.731760 kubelet[2203]: E1213 09:11:20.731621 2203 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:11:20.745229 kubelet[2203]: W1213 09:11:20.743021 2203 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.157.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.751389 kubelet[2203]: E1213 09:11:20.749501 2203 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.157.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:20.759295 kubelet[2203]: I1213 09:11:20.759251 2203 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:11:20.759295 kubelet[2203]: I1213 09:11:20.759279 2203 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:11:20.759295 kubelet[2203]: I1213 09:11:20.759311 2203 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:20.764371 kubelet[2203]: I1213 09:11:20.764300 2203 policy_none.go:49] "None policy: Start" Dec 13 09:11:20.765882 kubelet[2203]: I1213 09:11:20.765844 2203 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:11:20.766134 kubelet[2203]: I1213 09:11:20.766116 2203 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:11:20.778441 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 09:11:20.795187 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 09:11:20.801301 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 09:11:20.810157 kubelet[2203]: I1213 09:11:20.810128 2203 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:11:20.810642 kubelet[2203]: I1213 09:11:20.810585 2203 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:11:20.811004 kubelet[2203]: I1213 09:11:20.810915 2203 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:20.811126 kubelet[2203]: I1213 09:11:20.810993 2203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:11:20.812133 kubelet[2203]: E1213 09:11:20.812084 2203 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.157.113:6443/api/v1/nodes\": dial tcp 146.190.157.113:6443: connect: connection refused" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:20.820458 kubelet[2203]: E1213 09:11:20.820307 2203 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-e-b721934136\" not found" Dec 13 09:11:20.832897 kubelet[2203]: I1213 09:11:20.832793 2203 topology_manager.go:215] "Topology Admit Handler" podUID="5b31988cb2ce81b4c9eed649a3f7f4e3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.834766 kubelet[2203]: I1213 09:11:20.834695 2203 topology_manager.go:215] "Topology Admit Handler" podUID="696b11355c31ebd3612ea6bfab989784" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.837630 kubelet[2203]: I1213 09:11:20.837519 2203 topology_manager.go:215] "Topology Admit Handler" podUID="7d8a342f61aa246be3eee6a0a8d25cc9" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.849747 systemd[1]: Created slice kubepods-burstable-pod5b31988cb2ce81b4c9eed649a3f7f4e3.slice - libcontainer container kubepods-burstable-pod5b31988cb2ce81b4c9eed649a3f7f4e3.slice. Dec 13 09:11:20.870318 systemd[1]: Created slice kubepods-burstable-pod696b11355c31ebd3612ea6bfab989784.slice - libcontainer container kubepods-burstable-pod696b11355c31ebd3612ea6bfab989784.slice. Dec 13 09:11:20.879626 systemd[1]: Created slice kubepods-burstable-pod7d8a342f61aa246be3eee6a0a8d25cc9.slice - libcontainer container kubepods-burstable-pod7d8a342f61aa246be3eee6a0a8d25cc9.slice. 
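The kubepods*.slice units created above are the kubelet's systemd cgroup driver building its QoS hierarchy: kubepods.slice, one sub-slice per QoS class, then one slice per pod whose name embeds the pod UID with dashes mapped to underscores. The sketch below reproduces the slice names seen in this log from the UIDs in the "Topology Admit Handler" entries; it is illustrative only and not part of the log.

```python
"""Reproduce the pod slice names seen in this log (illustrative sketch).

With the systemd cgroup driver a pod's cgroup is named
kubepods[-<qos>]-pod<uid>.slice, with '-' in the UID mapped to '_'.
"""

def pod_slice(uid: str, qos: str = "burstable") -> str:
    # Guaranteed pods sit directly under kubepods.slice; burstable and
    # besteffort pods get a QoS sub-slice, as in the entries above.
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{uid.replace('-', '_')}.slice"

# UIDs taken from the Topology Admit Handler entries above.
print(pod_slice("5b31988cb2ce81b4c9eed649a3f7f4e3"))   # kube-controller-manager
print(pod_slice("696b11355c31ebd3612ea6bfab989784"))   # kube-scheduler
print(pod_slice("7d8a342f61aa246be3eee6a0a8d25cc9"))   # kube-apiserver
```

The dash-to-underscore mapping only becomes visible further down, when pods with dashed UIDs appear, e.g. kube-proxy-ndrtz (UID 53cccb43-b715-4aa2-816a-d52c8577954f) landing in kubepods-besteffort-pod53cccb43_b715_4aa2_816a_d52c8577954f.slice.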
Dec 13 09:11:20.910068 kubelet[2203]: I1213 09:11:20.909997 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d8a342f61aa246be3eee6a0a8d25cc9-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-e-b721934136\" (UID: \"7d8a342f61aa246be3eee6a0a8d25cc9\") " pod="kube-system/kube-apiserver-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.910229 kubelet[2203]: I1213 09:11:20.910121 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.910229 kubelet[2203]: I1213 09:11:20.910218 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.910373 kubelet[2203]: I1213 09:11:20.910262 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.910373 kubelet[2203]: I1213 09:11:20.910299 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/696b11355c31ebd3612ea6bfab989784-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-e-b721934136\" (UID: \"696b11355c31ebd3612ea6bfab989784\") " pod="kube-system/kube-scheduler-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.910373 kubelet[2203]: I1213 09:11:20.910327 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.910503 kubelet[2203]: I1213 09:11:20.910437 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.910503 kubelet[2203]: I1213 09:11:20.910470 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d8a342f61aa246be3eee6a0a8d25cc9-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-e-b721934136\" (UID: \"7d8a342f61aa246be3eee6a0a8d25cc9\") " pod="kube-system/kube-apiserver-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.910578 kubelet[2203]: I1213 09:11:20.910503 2203 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d8a342f61aa246be3eee6a0a8d25cc9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-e-b721934136\" (UID: \"7d8a342f61aa246be3eee6a0a8d25cc9\") " pod="kube-system/kube-apiserver-ci-4081.2.1-e-b721934136" Dec 13 09:11:20.912737 kubelet[2203]: E1213 09:11:20.912673 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.157.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-e-b721934136?timeout=10s\": dial tcp 146.190.157.113:6443: connect: connection refused" interval="400ms" Dec 13 09:11:21.014323 kubelet[2203]: I1213 09:11:21.013905 2203 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:21.014518 kubelet[2203]: E1213 09:11:21.014371 2203 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.157.113:6443/api/v1/nodes\": dial tcp 146.190.157.113:6443: connect: connection refused" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:21.166551 kubelet[2203]: E1213 09:11:21.166275 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:21.167728 containerd[1470]: time="2024-12-13T09:11:21.167304573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-e-b721934136,Uid:5b31988cb2ce81b4c9eed649a3f7f4e3,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:21.169909 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Dec 13 09:11:21.174653 kubelet[2203]: E1213 09:11:21.174534 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:21.181669 containerd[1470]: time="2024-12-13T09:11:21.181587400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-e-b721934136,Uid:696b11355c31ebd3612ea6bfab989784,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:21.184244 kubelet[2203]: E1213 09:11:21.183792 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:21.184814 containerd[1470]: time="2024-12-13T09:11:21.184736639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-e-b721934136,Uid:7d8a342f61aa246be3eee6a0a8d25cc9,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:21.315850 kubelet[2203]: E1213 09:11:21.315621 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.157.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-e-b721934136?timeout=10s\": dial tcp 146.190.157.113:6443: connect: connection refused" interval="800ms" Dec 13 09:11:21.426572 kubelet[2203]: I1213 09:11:21.420469 2203 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:21.427753 kubelet[2203]: E1213 09:11:21.427587 2203 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.157.113:6443/api/v1/nodes\": dial tcp 146.190.157.113:6443: connect: connection refused" node="ci-4081.2.1-e-b721934136" Dec 13 
09:11:21.698442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3943851382.mount: Deactivated successfully. Dec 13 09:11:21.711680 containerd[1470]: time="2024-12-13T09:11:21.711152077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:21.713064 containerd[1470]: time="2024-12-13T09:11:21.712990599Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:11:21.714761 containerd[1470]: time="2024-12-13T09:11:21.714700047Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:21.716865 containerd[1470]: time="2024-12-13T09:11:21.716505408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:21.717429 containerd[1470]: time="2024-12-13T09:11:21.717390170Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:21.718309 containerd[1470]: time="2024-12-13T09:11:21.717936530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:11:21.718309 containerd[1470]: time="2024-12-13T09:11:21.718246794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 09:11:21.722852 containerd[1470]: time="2024-12-13T09:11:21.722625148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:21.724216 containerd[1470]: time="2024-12-13T09:11:21.724152790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 542.449172ms" Dec 13 09:11:21.728504 containerd[1470]: time="2024-12-13T09:11:21.728288377Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.833778ms" Dec 13 09:11:21.729199 containerd[1470]: time="2024-12-13T09:11:21.729093743Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.267482ms" Dec 13 09:11:21.791524 kubelet[2203]: W1213 09:11:21.790933 2203 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://146.190.157.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:21.791806 kubelet[2203]: E1213 09:11:21.791762 2203 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.157.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:21.796230 kubelet[2203]: W1213 09:11:21.795090 2203 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.157.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:21.796230 kubelet[2203]: E1213 09:11:21.795176 2203 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.157.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:21.874018 kubelet[2203]: W1213 09:11:21.873201 2203 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.157.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:21.874018 kubelet[2203]: E1213 09:11:21.873288 2203 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.157.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:21.943771 containerd[1470]: time="2024-12-13T09:11:21.943634162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:21.944536 containerd[1470]: time="2024-12-13T09:11:21.944111620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:21.944536 containerd[1470]: time="2024-12-13T09:11:21.944183599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:21.944536 containerd[1470]: time="2024-12-13T09:11:21.944452390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:21.948373 containerd[1470]: time="2024-12-13T09:11:21.948235654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:21.949623 containerd[1470]: time="2024-12-13T09:11:21.948788928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:21.949623 containerd[1470]: time="2024-12-13T09:11:21.948866287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:21.949623 containerd[1470]: time="2024-12-13T09:11:21.948891086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:21.949623 containerd[1470]: time="2024-12-13T09:11:21.948999061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:21.953754 containerd[1470]: time="2024-12-13T09:11:21.952988995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:21.953754 containerd[1470]: time="2024-12-13T09:11:21.953034702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:21.953754 containerd[1470]: time="2024-12-13T09:11:21.953203100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:21.984661 systemd[1]: Started cri-containerd-38bc656331b1d5556a5a6761d19c22f6f378e80519e6227d0d6834730ace671b.scope - libcontainer container 38bc656331b1d5556a5a6761d19c22f6f378e80519e6227d0d6834730ace671b. Dec 13 09:11:21.992646 systemd[1]: Started cri-containerd-8e61c43b355a8acba9163f9841e4f4b9c24e60bb56a0a36d4f447c4a572cd25e.scope - libcontainer container 8e61c43b355a8acba9163f9841e4f4b9c24e60bb56a0a36d4f447c4a572cd25e. Dec 13 09:11:22.013597 systemd[1]: Started cri-containerd-ff6fec63861267598614147e7d2ca5f424af258aee3f52ff7d84c466ffd45dd6.scope - libcontainer container ff6fec63861267598614147e7d2ca5f424af258aee3f52ff7d84c466ffd45dd6. Dec 13 09:11:22.118590 kubelet[2203]: E1213 09:11:22.116589 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.157.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-e-b721934136?timeout=10s\": dial tcp 146.190.157.113:6443: connect: connection refused" interval="1.6s" Dec 13 09:11:22.123434 containerd[1470]: time="2024-12-13T09:11:22.121963909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-e-b721934136,Uid:696b11355c31ebd3612ea6bfab989784,Namespace:kube-system,Attempt:0,} returns sandbox id \"38bc656331b1d5556a5a6761d19c22f6f378e80519e6227d0d6834730ace671b\"" Dec 13 09:11:22.127115 kubelet[2203]: E1213 09:11:22.127060 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:22.132982 containerd[1470]: time="2024-12-13T09:11:22.132926766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-e-b721934136,Uid:7d8a342f61aa246be3eee6a0a8d25cc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e61c43b355a8acba9163f9841e4f4b9c24e60bb56a0a36d4f447c4a572cd25e\"" Dec 13 09:11:22.137396 kubelet[2203]: E1213 09:11:22.137307 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:22.141743 containerd[1470]: time="2024-12-13T09:11:22.141691918Z" level=info msg="CreateContainer within sandbox \"38bc656331b1d5556a5a6761d19c22f6f378e80519e6227d0d6834730ace671b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 09:11:22.142961 containerd[1470]: time="2024-12-13T09:11:22.142883176Z" level=info msg="CreateContainer within sandbox \"8e61c43b355a8acba9163f9841e4f4b9c24e60bb56a0a36d4f447c4a572cd25e\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 09:11:22.156251 containerd[1470]: time="2024-12-13T09:11:22.156179995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-e-b721934136,Uid:5b31988cb2ce81b4c9eed649a3f7f4e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff6fec63861267598614147e7d2ca5f424af258aee3f52ff7d84c466ffd45dd6\"" Dec 13 09:11:22.157520 kubelet[2203]: E1213 09:11:22.157325 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:22.160461 containerd[1470]: time="2024-12-13T09:11:22.160257899Z" level=info msg="CreateContainer within sandbox \"ff6fec63861267598614147e7d2ca5f424af258aee3f52ff7d84c466ffd45dd6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 09:11:22.189049 containerd[1470]: time="2024-12-13T09:11:22.188684778Z" level=info msg="CreateContainer within sandbox \"8e61c43b355a8acba9163f9841e4f4b9c24e60bb56a0a36d4f447c4a572cd25e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d00ee4e3baaad8719acc32f9ada097abb041bd40b7c57b6fe4471356a16bb5c2\"" Dec 13 09:11:22.190170 containerd[1470]: time="2024-12-13T09:11:22.190090086Z" level=info msg="CreateContainer within sandbox \"38bc656331b1d5556a5a6761d19c22f6f378e80519e6227d0d6834730ace671b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a09f55bc7db5b66f7cca866ea63879cd68fea132f9a857fdffe8622fd848c70d\"" Dec 13 09:11:22.190585 containerd[1470]: time="2024-12-13T09:11:22.190520433Z" level=info msg="StartContainer for \"d00ee4e3baaad8719acc32f9ada097abb041bd40b7c57b6fe4471356a16bb5c2\"" Dec 13 09:11:22.192369 containerd[1470]: time="2024-12-13T09:11:22.192223168Z" level=info msg="CreateContainer within sandbox \"ff6fec63861267598614147e7d2ca5f424af258aee3f52ff7d84c466ffd45dd6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9e47753692a4c471ee22b99b3cba92a90099d419ba5de9509f211737d5b7eed6\"" Dec 13 09:11:22.194721 containerd[1470]: time="2024-12-13T09:11:22.193230333Z" level=info msg="StartContainer for \"9e47753692a4c471ee22b99b3cba92a90099d419ba5de9509f211737d5b7eed6\"" Dec 13 09:11:22.204220 containerd[1470]: time="2024-12-13T09:11:22.204044270Z" level=info msg="StartContainer for \"a09f55bc7db5b66f7cca866ea63879cd68fea132f9a857fdffe8622fd848c70d\"" Dec 13 09:11:22.230005 kubelet[2203]: I1213 09:11:22.229950 2203 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:22.230963 kubelet[2203]: E1213 09:11:22.230931 2203 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.157.113:6443/api/v1/nodes\": dial tcp 146.190.157.113:6443: connect: connection refused" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:22.244686 kubelet[2203]: W1213 09:11:22.244614 2203 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.157.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-e-b721934136&limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:22.244924 kubelet[2203]: E1213 09:11:22.244896 2203 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://146.190.157.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-e-b721934136&limit=500&resourceVersion=0": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:22.251038 systemd[1]: Started cri-containerd-9e47753692a4c471ee22b99b3cba92a90099d419ba5de9509f211737d5b7eed6.scope - libcontainer container 9e47753692a4c471ee22b99b3cba92a90099d419ba5de9509f211737d5b7eed6. Dec 13 09:11:22.253781 systemd[1]: Started cri-containerd-d00ee4e3baaad8719acc32f9ada097abb041bd40b7c57b6fe4471356a16bb5c2.scope - libcontainer container d00ee4e3baaad8719acc32f9ada097abb041bd40b7c57b6fe4471356a16bb5c2. Dec 13 09:11:22.264617 systemd[1]: Started cri-containerd-a09f55bc7db5b66f7cca866ea63879cd68fea132f9a857fdffe8622fd848c70d.scope - libcontainer container a09f55bc7db5b66f7cca866ea63879cd68fea132f9a857fdffe8622fd848c70d. Dec 13 09:11:22.362260 containerd[1470]: time="2024-12-13T09:11:22.361688879Z" level=info msg="StartContainer for \"9e47753692a4c471ee22b99b3cba92a90099d419ba5de9509f211737d5b7eed6\" returns successfully" Dec 13 09:11:22.362260 containerd[1470]: time="2024-12-13T09:11:22.361877643Z" level=info msg="StartContainer for \"d00ee4e3baaad8719acc32f9ada097abb041bd40b7c57b6fe4471356a16bb5c2\" returns successfully" Dec 13 09:11:22.387678 containerd[1470]: time="2024-12-13T09:11:22.387621867Z" level=info msg="StartContainer for \"a09f55bc7db5b66f7cca866ea63879cd68fea132f9a857fdffe8622fd848c70d\" returns successfully" Dec 13 09:11:22.696265 kubelet[2203]: E1213 09:11:22.696219 2203 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.157.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.157.113:6443: connect: connection refused Dec 13 09:11:22.774623 kubelet[2203]: E1213 09:11:22.774570 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:22.779395 kubelet[2203]: E1213 09:11:22.777826 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:22.792625 kubelet[2203]: E1213 09:11:22.792578 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:23.783458 kubelet[2203]: E1213 09:11:23.780963 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:23.833004 kubelet[2203]: I1213 09:11:23.832964 2203 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:25.070760 kubelet[2203]: E1213 09:11:25.070710 2203 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-e-b721934136\" not found" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:25.201018 kubelet[2203]: E1213 09:11:25.200847 2203 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.1-e-b721934136.1810b18e299c2cbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-e-b721934136,UID:ci-4081.2.1-e-b721934136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-e-b721934136,},FirstTimestamp:2024-12-13 09:11:20.67921427 +0000 UTC m=+0.589269694,LastTimestamp:2024-12-13 09:11:20.67921427 +0000 UTC m=+0.589269694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-e-b721934136,}" Dec 13 09:11:25.265769 kubelet[2203]: I1213 09:11:25.265576 2203 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:25.278209 kubelet[2203]: E1213 09:11:25.278153 2203 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-e-b721934136\" not found" Dec 13 09:11:25.379349 kubelet[2203]: E1213 09:11:25.379185 2203 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-e-b721934136\" not found" Dec 13 09:11:25.675580 kubelet[2203]: I1213 09:11:25.675412 2203 apiserver.go:52] "Watching apiserver" Dec 13 09:11:25.708370 kubelet[2203]: I1213 09:11:25.708291 2203 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 09:11:27.539511 systemd[1]: Reloading requested from client PID 2478 ('systemctl') (unit session-7.scope)... Dec 13 09:11:27.539538 systemd[1]: Reloading... Dec 13 09:11:27.647393 zram_generator::config[2520]: No configuration found. Dec 13 09:11:27.812670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:11:27.908955 systemd[1]: Reloading finished in 368 ms. Dec 13 09:11:27.964247 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:27.981897 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 09:11:27.982149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:27.982227 systemd[1]: kubelet.service: Consumed 1.109s CPU time, 110.3M memory peak, 0B memory swap peak. Dec 13 09:11:27.987810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:28.193765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:28.195237 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:11:28.284118 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:28.284118 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 09:11:28.284118 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 09:11:28.284657 kubelet[2567]: I1213 09:11:28.284160 2567 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:11:28.291437 kubelet[2567]: I1213 09:11:28.290937 2567 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 09:11:28.291437 kubelet[2567]: I1213 09:11:28.290971 2567 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:11:28.292157 kubelet[2567]: I1213 09:11:28.292115 2567 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 09:11:28.298557 kubelet[2567]: I1213 09:11:28.298511 2567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 09:11:28.301253 kubelet[2567]: I1213 09:11:28.301204 2567 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:11:28.309450 kubelet[2567]: I1213 09:11:28.309261 2567 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 09:11:28.313751 kubelet[2567]: I1213 09:11:28.311574 2567 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:11:28.313751 kubelet[2567]: I1213 09:11:28.311645 2567 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-e-b721934136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 09:11:28.313751 kubelet[2567]: I1213 09:11:28.312007 2567 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:11:28.313751 kubelet[2567]: I1213 09:11:28.312033 2567 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 09:11:28.314053 kubelet[2567]: I1213 09:11:28.312097 2567 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:28.314053 kubelet[2567]: I1213 09:11:28.312245 2567 kubelet.go:400] "Attempting to sync node with API server" Dec 13 09:11:28.314053 kubelet[2567]: I1213 09:11:28.312259 2567 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Dec 13 09:11:28.314053 kubelet[2567]: I1213 09:11:28.312288 2567 kubelet.go:312] "Adding apiserver pod source" Dec 13 09:11:28.314053 kubelet[2567]: I1213 09:11:28.312302 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:11:28.319877 kubelet[2567]: I1213 09:11:28.319828 2567 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:11:28.324595 kubelet[2567]: I1213 09:11:28.324521 2567 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:11:28.326399 kubelet[2567]: I1213 09:11:28.326374 2567 server.go:1264] "Started kubelet" Dec 13 09:11:28.332144 kubelet[2567]: I1213 09:11:28.332115 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:11:28.337231 kubelet[2567]: I1213 09:11:28.337165 2567 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:11:28.338780 kubelet[2567]: I1213 09:11:28.338449 2567 server.go:455] "Adding debug handlers to kubelet server" Dec 13 09:11:28.339524 kubelet[2567]: I1213 09:11:28.339499 2567 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 09:11:28.341437 kubelet[2567]: I1213 09:11:28.340757 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:11:28.341437 kubelet[2567]: I1213 09:11:28.341027 2567 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:11:28.343750 kubelet[2567]: I1213 09:11:28.343110 2567 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 09:11:28.346121 kubelet[2567]: I1213 09:11:28.344108 2567 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:11:28.356553 kubelet[2567]: I1213 09:11:28.356431 2567 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:11:28.356553 kubelet[2567]: I1213 09:11:28.356548 2567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:11:28.358993 kubelet[2567]: E1213 09:11:28.358916 2567 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:11:28.359355 kubelet[2567]: I1213 09:11:28.359232 2567 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:11:28.369425 kubelet[2567]: I1213 09:11:28.369248 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:11:28.370972 kubelet[2567]: I1213 09:11:28.370932 2567 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 09:11:28.370972 kubelet[2567]: I1213 09:11:28.370977 2567 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:11:28.371137 kubelet[2567]: I1213 09:11:28.371001 2567 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 09:11:28.371137 kubelet[2567]: E1213 09:11:28.371052 2567 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:11:28.437698 kubelet[2567]: I1213 09:11:28.437655 2567 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:11:28.437698 kubelet[2567]: I1213 09:11:28.437682 2567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:11:28.437698 kubelet[2567]: I1213 09:11:28.437714 2567 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:28.437954 kubelet[2567]: I1213 09:11:28.437936 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 09:11:28.437994 kubelet[2567]: I1213 09:11:28.437956 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 09:11:28.437994 kubelet[2567]: I1213 09:11:28.437982 2567 policy_none.go:49] "None policy: Start" Dec 13 09:11:28.439429 kubelet[2567]: I1213 09:11:28.439074 2567 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:11:28.439429 kubelet[2567]: I1213 09:11:28.439125 2567 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:11:28.439429 kubelet[2567]: I1213 09:11:28.439304 2567 state_mem.go:75] "Updated machine memory state" Dec 13 09:11:28.441723 kubelet[2567]: I1213 09:11:28.441689 2567 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:28.453781 kubelet[2567]: I1213 09:11:28.451438 2567 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:11:28.453781 kubelet[2567]: I1213 09:11:28.452041 2567 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:11:28.453781 kubelet[2567]: I1213 09:11:28.452390 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:11:28.460942 kubelet[2567]: I1213 09:11:28.459104 2567 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:28.460942 kubelet[2567]: I1213 09:11:28.459214 2567 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-e-b721934136" Dec 13 09:11:28.475311 kubelet[2567]: I1213 09:11:28.474309 2567 topology_manager.go:215] "Topology Admit Handler" podUID="7d8a342f61aa246be3eee6a0a8d25cc9" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.477660 kubelet[2567]: I1213 09:11:28.476116 2567 topology_manager.go:215] "Topology Admit Handler" podUID="5b31988cb2ce81b4c9eed649a3f7f4e3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.477660 kubelet[2567]: I1213 09:11:28.477200 2567 topology_manager.go:215] "Topology Admit Handler" podUID="696b11355c31ebd3612ea6bfab989784" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.502212 kubelet[2567]: W1213 09:11:28.501932 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:11:28.503601 kubelet[2567]: W1213 09:11:28.502607 2567 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:11:28.504410 kubelet[2567]: W1213 09:11:28.503989 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:11:28.646418 kubelet[2567]: I1213 09:11:28.646368 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.646707 kubelet[2567]: I1213 09:11:28.646439 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.646707 kubelet[2567]: I1213 09:11:28.646472 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.646707 kubelet[2567]: I1213 09:11:28.646491 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d8a342f61aa246be3eee6a0a8d25cc9-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-e-b721934136\" (UID: \"7d8a342f61aa246be3eee6a0a8d25cc9\") " pod="kube-system/kube-apiserver-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.646707 kubelet[2567]: I1213 09:11:28.646519 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.646707 kubelet[2567]: I1213 09:11:28.646550 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b31988cb2ce81b4c9eed649a3f7f4e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-e-b721934136\" (UID: \"5b31988cb2ce81b4c9eed649a3f7f4e3\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.646888 kubelet[2567]: I1213 09:11:28.646578 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/696b11355c31ebd3612ea6bfab989784-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-e-b721934136\" (UID: \"696b11355c31ebd3612ea6bfab989784\") " pod="kube-system/kube-scheduler-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.646888 kubelet[2567]: I1213 09:11:28.646606 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7d8a342f61aa246be3eee6a0a8d25cc9-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-e-b721934136\" (UID: \"7d8a342f61aa246be3eee6a0a8d25cc9\") " pod="kube-system/kube-apiserver-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.646888 kubelet[2567]: I1213 09:11:28.646627 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d8a342f61aa246be3eee6a0a8d25cc9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-e-b721934136\" (UID: \"7d8a342f61aa246be3eee6a0a8d25cc9\") " pod="kube-system/kube-apiserver-ci-4081.2.1-e-b721934136" Dec 13 09:11:28.805892 kubelet[2567]: E1213 09:11:28.805636 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:28.807031 kubelet[2567]: E1213 09:11:28.806921 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:28.808475 kubelet[2567]: E1213 09:11:28.806987 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:29.315230 kubelet[2567]: I1213 09:11:29.314882 2567 apiserver.go:52] "Watching apiserver" Dec 13 09:11:29.344196 kubelet[2567]: I1213 09:11:29.344139 2567 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 09:11:29.401375 kubelet[2567]: E1213 09:11:29.399079 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:29.401375 kubelet[2567]: E1213 09:11:29.399219 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:29.403926 kubelet[2567]: E1213 09:11:29.402826 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:29.474490 kubelet[2567]: I1213 09:11:29.473854 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-e-b721934136" podStartSLOduration=1.473827395 podStartE2EDuration="1.473827395s" podCreationTimestamp="2024-12-13 09:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:29.452507047 +0000 UTC m=+1.249640274" watchObservedRunningTime="2024-12-13 09:11:29.473827395 +0000 UTC m=+1.270960618" Dec 13 09:11:29.528921 kubelet[2567]: I1213 09:11:29.528599 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-e-b721934136" podStartSLOduration=1.528576439 podStartE2EDuration="1.528576439s" podCreationTimestamp="2024-12-13 09:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:29.474272299 +0000 UTC m=+1.271405529" watchObservedRunningTime="2024-12-13 09:11:29.528576439 +0000 UTC 
m=+1.325709666" Dec 13 09:11:29.609705 kubelet[2567]: I1213 09:11:29.609412 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-e-b721934136" podStartSLOduration=1.609390478 podStartE2EDuration="1.609390478s" podCreationTimestamp="2024-12-13 09:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:29.532178969 +0000 UTC m=+1.329312200" watchObservedRunningTime="2024-12-13 09:11:29.609390478 +0000 UTC m=+1.406523705" Dec 13 09:11:31.577706 systemd-resolved[1325]: Clock change detected. Flushing caches. Dec 13 09:11:31.578873 systemd-timesyncd[1346]: Contacted time server 45.79.111.167:123 (2.flatcar.pool.ntp.org). Dec 13 09:11:31.579005 systemd-timesyncd[1346]: Initial clock synchronization to Fri 2024-12-13 09:11:31.577288 UTC. Dec 13 09:11:31.718627 kubelet[2567]: E1213 09:11:31.718553 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:32.717274 kubelet[2567]: E1213 09:11:32.716376 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:33.719674 kubelet[2567]: E1213 09:11:33.718806 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:34.721378 kubelet[2567]: E1213 09:11:34.720603 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:35.290344 kubelet[2567]: E1213 09:11:35.290264 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:35.722823 kubelet[2567]: E1213 09:11:35.722332 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:36.095112 sudo[1653]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:36.101402 sshd[1650]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:36.107221 systemd[1]: sshd@6-146.190.157.113:22-147.75.109.163:59034.service: Deactivated successfully. Dec 13 09:11:36.110344 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 09:11:36.110939 systemd[1]: session-7.scope: Consumed 7.186s CPU time, 188.7M memory peak, 0B memory swap peak. Dec 13 09:11:36.113592 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Dec 13 09:11:36.115329 systemd-logind[1447]: Removed session 7. Dec 13 09:11:39.196682 kubelet[2567]: E1213 09:11:39.196350 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:39.637832 update_engine[1448]: I20241213 09:11:39.636607 1448 update_attempter.cc:509] Updating boot flags... 
Dec 13 09:11:39.674677 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2651) Dec 13 09:11:39.732714 kubelet[2567]: E1213 09:11:39.732666 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:39.767827 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2653) Dec 13 09:11:43.248896 kubelet[2567]: I1213 09:11:43.248842 2567 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 09:11:43.257403 containerd[1470]: time="2024-12-13T09:11:43.257000933Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 09:11:43.258164 kubelet[2567]: I1213 09:11:43.257609 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 09:11:44.207205 kubelet[2567]: I1213 09:11:44.207143 2567 topology_manager.go:215] "Topology Admit Handler" podUID="53cccb43-b715-4aa2-816a-d52c8577954f" podNamespace="kube-system" podName="kube-proxy-ndrtz" Dec 13 09:11:44.230973 systemd[1]: Created slice kubepods-besteffort-pod53cccb43_b715_4aa2_816a_d52c8577954f.slice - libcontainer container kubepods-besteffort-pod53cccb43_b715_4aa2_816a_d52c8577954f.slice. Dec 13 09:11:44.314707 kubelet[2567]: I1213 09:11:44.314522 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53cccb43-b715-4aa2-816a-d52c8577954f-kube-proxy\") pod \"kube-proxy-ndrtz\" (UID: \"53cccb43-b715-4aa2-816a-d52c8577954f\") " pod="kube-system/kube-proxy-ndrtz" Dec 13 09:11:44.314707 kubelet[2567]: I1213 09:11:44.314590 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53cccb43-b715-4aa2-816a-d52c8577954f-xtables-lock\") pod \"kube-proxy-ndrtz\" (UID: \"53cccb43-b715-4aa2-816a-d52c8577954f\") " pod="kube-system/kube-proxy-ndrtz" Dec 13 09:11:44.314707 kubelet[2567]: I1213 09:11:44.314620 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53cccb43-b715-4aa2-816a-d52c8577954f-lib-modules\") pod \"kube-proxy-ndrtz\" (UID: \"53cccb43-b715-4aa2-816a-d52c8577954f\") " pod="kube-system/kube-proxy-ndrtz" Dec 13 09:11:44.314707 kubelet[2567]: I1213 09:11:44.314670 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bbx4\" (UniqueName: \"kubernetes.io/projected/53cccb43-b715-4aa2-816a-d52c8577954f-kube-api-access-8bbx4\") pod \"kube-proxy-ndrtz\" (UID: \"53cccb43-b715-4aa2-816a-d52c8577954f\") " pod="kube-system/kube-proxy-ndrtz" Dec 13 09:11:44.374358 kubelet[2567]: I1213 09:11:44.373323 2567 topology_manager.go:215] "Topology Admit Handler" podUID="0f38eda3-b3c1-465f-9fe7-199f7e8fbbe5" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-zcn7k" Dec 13 09:11:44.383315 systemd[1]: Created slice kubepods-besteffort-pod0f38eda3_b3c1_465f_9fe7_199f7e8fbbe5.slice - libcontainer container kubepods-besteffort-pod0f38eda3_b3c1_465f_9fe7_199f7e8fbbe5.slice. 
Dec 13 09:11:44.390684 kubelet[2567]: W1213 09:11:44.389554 2567 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.2.1-e-b721934136" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.2.1-e-b721934136' and this object Dec 13 09:11:44.390684 kubelet[2567]: E1213 09:11:44.389617 2567 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.2.1-e-b721934136" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.2.1-e-b721934136' and this object Dec 13 09:11:44.390684 kubelet[2567]: W1213 09:11:44.389658 2567 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.2.1-e-b721934136" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.2.1-e-b721934136' and this object Dec 13 09:11:44.390684 kubelet[2567]: E1213 09:11:44.389669 2567 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.2.1-e-b721934136" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.2.1-e-b721934136' and this object Dec 13 09:11:44.516360 kubelet[2567]: I1213 09:11:44.516191 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfx62\" (UniqueName: \"kubernetes.io/projected/0f38eda3-b3c1-465f-9fe7-199f7e8fbbe5-kube-api-access-cfx62\") pod \"tigera-operator-7bc55997bb-zcn7k\" (UID: \"0f38eda3-b3c1-465f-9fe7-199f7e8fbbe5\") " pod="tigera-operator/tigera-operator-7bc55997bb-zcn7k" Dec 13 09:11:44.516964 kubelet[2567]: I1213 09:11:44.516876 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f38eda3-b3c1-465f-9fe7-199f7e8fbbe5-var-lib-calico\") pod \"tigera-operator-7bc55997bb-zcn7k\" (UID: \"0f38eda3-b3c1-465f-9fe7-199f7e8fbbe5\") " pod="tigera-operator/tigera-operator-7bc55997bb-zcn7k" Dec 13 09:11:44.539502 kubelet[2567]: E1213 09:11:44.539293 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:44.540784 containerd[1470]: time="2024-12-13T09:11:44.540492741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndrtz,Uid:53cccb43-b715-4aa2-816a-d52c8577954f,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:44.580262 containerd[1470]: time="2024-12-13T09:11:44.579695129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:44.580262 containerd[1470]: time="2024-12-13T09:11:44.579774562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:44.580262 containerd[1470]: time="2024-12-13T09:11:44.579790675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:44.580262 containerd[1470]: time="2024-12-13T09:11:44.579922721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:44.607458 systemd[1]: run-containerd-runc-k8s.io-2c0b183c90d9d445615864e58a199897d710ba45863c0483d1be0fb3be4611d1-runc.VhUxWz.mount: Deactivated successfully. Dec 13 09:11:44.620239 systemd[1]: Started cri-containerd-2c0b183c90d9d445615864e58a199897d710ba45863c0483d1be0fb3be4611d1.scope - libcontainer container 2c0b183c90d9d445615864e58a199897d710ba45863c0483d1be0fb3be4611d1. Dec 13 09:11:44.667256 containerd[1470]: time="2024-12-13T09:11:44.667209186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndrtz,Uid:53cccb43-b715-4aa2-816a-d52c8577954f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c0b183c90d9d445615864e58a199897d710ba45863c0483d1be0fb3be4611d1\"" Dec 13 09:11:44.668971 kubelet[2567]: E1213 09:11:44.668912 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:44.675526 containerd[1470]: time="2024-12-13T09:11:44.675324558Z" level=info msg="CreateContainer within sandbox \"2c0b183c90d9d445615864e58a199897d710ba45863c0483d1be0fb3be4611d1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 09:11:44.697748 containerd[1470]: time="2024-12-13T09:11:44.697693007Z" level=info msg="CreateContainer within sandbox \"2c0b183c90d9d445615864e58a199897d710ba45863c0483d1be0fb3be4611d1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"66e2dd236332ec9db03c2f6cc560230dcd52b14e3d6fa35a870313c9443a8803\"" Dec 13 09:11:44.699288 containerd[1470]: time="2024-12-13T09:11:44.699224818Z" level=info msg="StartContainer for \"66e2dd236332ec9db03c2f6cc560230dcd52b14e3d6fa35a870313c9443a8803\"" Dec 13 09:11:44.739920 systemd[1]: Started cri-containerd-66e2dd236332ec9db03c2f6cc560230dcd52b14e3d6fa35a870313c9443a8803.scope - libcontainer container 66e2dd236332ec9db03c2f6cc560230dcd52b14e3d6fa35a870313c9443a8803. Dec 13 09:11:44.791210 containerd[1470]: time="2024-12-13T09:11:44.790915072Z" level=info msg="StartContainer for \"66e2dd236332ec9db03c2f6cc560230dcd52b14e3d6fa35a870313c9443a8803\" returns successfully" Dec 13 09:11:45.768054 kubelet[2567]: E1213 09:11:45.767965 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.893733 containerd[1470]: time="2024-12-13T09:11:45.893147841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-zcn7k,Uid:0f38eda3-b3c1-465f-9fe7-199f7e8fbbe5,Namespace:tigera-operator,Attempt:0,}" Dec 13 09:11:45.936509 containerd[1470]: time="2024-12-13T09:11:45.936350793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:45.937939 containerd[1470]: time="2024-12-13T09:11:45.936450214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:45.938392 containerd[1470]: time="2024-12-13T09:11:45.938135536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:45.938392 containerd[1470]: time="2024-12-13T09:11:45.938327434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:45.980076 systemd[1]: Started cri-containerd-9f2effc5c7bb870f623aabf0b14b244166ba5cd7790c5a916075c9edbc274f43.scope - libcontainer container 9f2effc5c7bb870f623aabf0b14b244166ba5cd7790c5a916075c9edbc274f43. Dec 13 09:11:46.052607 containerd[1470]: time="2024-12-13T09:11:46.052065942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-zcn7k,Uid:0f38eda3-b3c1-465f-9fe7-199f7e8fbbe5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9f2effc5c7bb870f623aabf0b14b244166ba5cd7790c5a916075c9edbc274f43\"" Dec 13 09:11:46.064428 containerd[1470]: time="2024-12-13T09:11:46.064253222Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 09:11:46.774953 kubelet[2567]: E1213 09:11:46.774529 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:47.479746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040549758.mount: Deactivated successfully. Dec 13 09:11:52.096547 containerd[1470]: time="2024-12-13T09:11:52.095346567Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:52.096547 containerd[1470]: time="2024-12-13T09:11:52.096433837Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763697" Dec 13 09:11:52.097569 containerd[1470]: time="2024-12-13T09:11:52.097524734Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:52.101693 containerd[1470]: time="2024-12-13T09:11:52.101606240Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:52.103303 containerd[1470]: time="2024-12-13T09:11:52.103237136Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 6.038923411s" Dec 13 09:11:52.103303 containerd[1470]: time="2024-12-13T09:11:52.103303271Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 09:11:52.110625 containerd[1470]: time="2024-12-13T09:11:52.110398077Z" level=info msg="CreateContainer within sandbox \"9f2effc5c7bb870f623aabf0b14b244166ba5cd7790c5a916075c9edbc274f43\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 09:11:52.130394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954278293.mount: Deactivated successfully. 
Dec 13 09:11:52.136928 containerd[1470]: time="2024-12-13T09:11:52.136616870Z" level=info msg="CreateContainer within sandbox \"9f2effc5c7bb870f623aabf0b14b244166ba5cd7790c5a916075c9edbc274f43\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"af7ff54ecd9c70810948665a85ea46f862edce9ebcb5550e46a32ec8fd38c991\"" Dec 13 09:11:52.137970 containerd[1470]: time="2024-12-13T09:11:52.137694639Z" level=info msg="StartContainer for \"af7ff54ecd9c70810948665a85ea46f862edce9ebcb5550e46a32ec8fd38c991\"" Dec 13 09:11:52.185500 systemd[1]: Started cri-containerd-af7ff54ecd9c70810948665a85ea46f862edce9ebcb5550e46a32ec8fd38c991.scope - libcontainer container af7ff54ecd9c70810948665a85ea46f862edce9ebcb5550e46a32ec8fd38c991. Dec 13 09:11:52.281845 containerd[1470]: time="2024-12-13T09:11:52.280140436Z" level=info msg="StartContainer for \"af7ff54ecd9c70810948665a85ea46f862edce9ebcb5550e46a32ec8fd38c991\" returns successfully" Dec 13 09:11:52.822111 kubelet[2567]: I1213 09:11:52.821500 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ndrtz" podStartSLOduration=8.82146918 podStartE2EDuration="8.82146918s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:45.784490414 +0000 UTC m=+16.271320735" watchObservedRunningTime="2024-12-13 09:11:52.82146918 +0000 UTC m=+23.308299445" Dec 13 09:11:55.445720 kubelet[2567]: I1213 09:11:55.445576 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-zcn7k" podStartSLOduration=5.396164813 podStartE2EDuration="11.445551172s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="2024-12-13 09:11:46.055524119 +0000 UTC m=+16.542354376" lastFinishedPulling="2024-12-13 09:11:52.104910491 +0000 UTC m=+22.591740735" observedRunningTime="2024-12-13 09:11:52.82478377 +0000 UTC m=+23.311614039" watchObservedRunningTime="2024-12-13 09:11:55.445551172 +0000 UTC m=+25.932381425" Dec 13 09:11:55.446250 kubelet[2567]: I1213 09:11:55.445863 2567 topology_manager.go:215] "Topology Admit Handler" podUID="4d9c2b39-c09f-4d6c-af06-e051049cb769" podNamespace="calico-system" podName="calico-typha-8d5657df6-r8r2b" Dec 13 09:11:55.459195 systemd[1]: Created slice kubepods-besteffort-pod4d9c2b39_c09f_4d6c_af06_e051049cb769.slice - libcontainer container kubepods-besteffort-pod4d9c2b39_c09f_4d6c_af06_e051049cb769.slice. 
Dec 13 09:11:55.492914 kubelet[2567]: I1213 09:11:55.492236 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d9c2b39-c09f-4d6c-af06-e051049cb769-tigera-ca-bundle\") pod \"calico-typha-8d5657df6-r8r2b\" (UID: \"4d9c2b39-c09f-4d6c-af06-e051049cb769\") " pod="calico-system/calico-typha-8d5657df6-r8r2b" Dec 13 09:11:55.492914 kubelet[2567]: I1213 09:11:55.492292 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4d9c2b39-c09f-4d6c-af06-e051049cb769-typha-certs\") pod \"calico-typha-8d5657df6-r8r2b\" (UID: \"4d9c2b39-c09f-4d6c-af06-e051049cb769\") " pod="calico-system/calico-typha-8d5657df6-r8r2b" Dec 13 09:11:55.492914 kubelet[2567]: I1213 09:11:55.492320 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkc7k\" (UniqueName: \"kubernetes.io/projected/4d9c2b39-c09f-4d6c-af06-e051049cb769-kube-api-access-bkc7k\") pod \"calico-typha-8d5657df6-r8r2b\" (UID: \"4d9c2b39-c09f-4d6c-af06-e051049cb769\") " pod="calico-system/calico-typha-8d5657df6-r8r2b" Dec 13 09:11:55.592875 kubelet[2567]: I1213 09:11:55.591504 2567 topology_manager.go:215] "Topology Admit Handler" podUID="bc2a685c-c9eb-4841-b7aa-5d9c9250aada" podNamespace="calico-system" podName="calico-node-tl8fc" Dec 13 09:11:55.633135 systemd[1]: Created slice kubepods-besteffort-podbc2a685c_c9eb_4841_b7aa_5d9c9250aada.slice - libcontainer container kubepods-besteffort-podbc2a685c_c9eb_4841_b7aa_5d9c9250aada.slice. Dec 13 09:11:55.694179 kubelet[2567]: I1213 09:11:55.694117 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-policysync\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694179 kubelet[2567]: I1213 09:11:55.694182 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mbp\" (UniqueName: \"kubernetes.io/projected/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-kube-api-access-27mbp\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694429 kubelet[2567]: I1213 09:11:55.694207 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-var-lib-calico\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694429 kubelet[2567]: I1213 09:11:55.694241 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-tigera-ca-bundle\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694429 kubelet[2567]: I1213 09:11:55.694265 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-node-certs\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") 
" pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694429 kubelet[2567]: I1213 09:11:55.694282 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-xtables-lock\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694429 kubelet[2567]: I1213 09:11:55.694297 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-var-run-calico\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694849 kubelet[2567]: I1213 09:11:55.694312 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-cni-bin-dir\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694849 kubelet[2567]: I1213 09:11:55.694326 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-lib-modules\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694849 kubelet[2567]: I1213 09:11:55.694341 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-cni-net-dir\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694849 kubelet[2567]: I1213 09:11:55.694357 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-flexvol-driver-host\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.694849 kubelet[2567]: I1213 09:11:55.694371 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bc2a685c-c9eb-4841-b7aa-5d9c9250aada-cni-log-dir\") pod \"calico-node-tl8fc\" (UID: \"bc2a685c-c9eb-4841-b7aa-5d9c9250aada\") " pod="calico-system/calico-node-tl8fc" Dec 13 09:11:55.736989 kubelet[2567]: I1213 09:11:55.735965 2567 topology_manager.go:215] "Topology Admit Handler" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" podNamespace="calico-system" podName="csi-node-driver-fg4tr" Dec 13 09:11:55.739984 kubelet[2567]: E1213 09:11:55.737458 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fg4tr" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:11:55.769400 kubelet[2567]: E1213 09:11:55.768683 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:55.769592 containerd[1470]: time="2024-12-13T09:11:55.769393003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d5657df6-r8r2b,Uid:4d9c2b39-c09f-4d6c-af06-e051049cb769,Namespace:calico-system,Attempt:0,}" Dec 13 09:11:55.796168 kubelet[2567]: I1213 09:11:55.795385 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7e99ecb7-45e0-439d-b399-3755faed5090-varrun\") pod \"csi-node-driver-fg4tr\" (UID: \"7e99ecb7-45e0-439d-b399-3755faed5090\") " pod="calico-system/csi-node-driver-fg4tr" Dec 13 09:11:55.796168 kubelet[2567]: I1213 09:11:55.795451 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e99ecb7-45e0-439d-b399-3755faed5090-kubelet-dir\") pod \"csi-node-driver-fg4tr\" (UID: \"7e99ecb7-45e0-439d-b399-3755faed5090\") " pod="calico-system/csi-node-driver-fg4tr" Dec 13 09:11:55.796168 kubelet[2567]: I1213 09:11:55.795485 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn772\" (UniqueName: \"kubernetes.io/projected/7e99ecb7-45e0-439d-b399-3755faed5090-kube-api-access-fn772\") pod \"csi-node-driver-fg4tr\" (UID: \"7e99ecb7-45e0-439d-b399-3755faed5090\") " pod="calico-system/csi-node-driver-fg4tr" Dec 13 09:11:55.796168 kubelet[2567]: I1213 09:11:55.795551 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e99ecb7-45e0-439d-b399-3755faed5090-registration-dir\") pod \"csi-node-driver-fg4tr\" (UID: \"7e99ecb7-45e0-439d-b399-3755faed5090\") " pod="calico-system/csi-node-driver-fg4tr" Dec 13 09:11:55.796168 kubelet[2567]: I1213 09:11:55.795773 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e99ecb7-45e0-439d-b399-3755faed5090-socket-dir\") pod \"csi-node-driver-fg4tr\" (UID: \"7e99ecb7-45e0-439d-b399-3755faed5090\") " pod="calico-system/csi-node-driver-fg4tr" Dec 13 09:11:55.817669 kubelet[2567]: E1213 09:11:55.817286 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.817669 kubelet[2567]: W1213 09:11:55.817318 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.817669 kubelet[2567]: E1213 09:11:55.817348 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.821611 kubelet[2567]: E1213 09:11:55.820363 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.821611 kubelet[2567]: W1213 09:11:55.820403 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.821611 kubelet[2567]: E1213 09:11:55.820450 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.826604 kubelet[2567]: E1213 09:11:55.826561 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.831242 kubelet[2567]: W1213 09:11:55.830303 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.831242 kubelet[2567]: E1213 09:11:55.831822 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.834078 kubelet[2567]: E1213 09:11:55.834036 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.834546 kubelet[2567]: W1213 09:11:55.834512 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.835326 kubelet[2567]: E1213 09:11:55.835239 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.838471 kubelet[2567]: E1213 09:11:55.837817 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.838471 kubelet[2567]: W1213 09:11:55.837842 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.838471 kubelet[2567]: E1213 09:11:55.837953 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.839694 kubelet[2567]: E1213 09:11:55.839514 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.839694 kubelet[2567]: W1213 09:11:55.839542 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.842317 kubelet[2567]: E1213 09:11:55.842272 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.842522 kubelet[2567]: W1213 09:11:55.842503 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.842845 kubelet[2567]: E1213 09:11:55.842809 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.842929 kubelet[2567]: E1213 09:11:55.842866 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.843239 kubelet[2567]: E1213 09:11:55.843195 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.843518 kubelet[2567]: W1213 09:11:55.843492 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.844861 kubelet[2567]: E1213 09:11:55.844386 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.845259 kubelet[2567]: E1213 09:11:55.845212 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.845495 kubelet[2567]: W1213 09:11:55.845434 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.845681 kubelet[2567]: E1213 09:11:55.845580 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.846235 kubelet[2567]: E1213 09:11:55.846164 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.846235 kubelet[2567]: W1213 09:11:55.846211 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.846481 kubelet[2567]: E1213 09:11:55.846294 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.846910 kubelet[2567]: E1213 09:11:55.846830 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.846910 kubelet[2567]: W1213 09:11:55.846843 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.846910 kubelet[2567]: E1213 09:11:55.846867 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.847462 kubelet[2567]: E1213 09:11:55.847444 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.847667 kubelet[2567]: W1213 09:11:55.847544 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.848318 kubelet[2567]: E1213 09:11:55.848289 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.850685 kubelet[2567]: E1213 09:11:55.848624 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.850685 kubelet[2567]: W1213 09:11:55.848675 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.850685 kubelet[2567]: E1213 09:11:55.848695 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.851268 kubelet[2567]: E1213 09:11:55.851113 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.851268 kubelet[2567]: W1213 09:11:55.851136 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.851467 kubelet[2567]: E1213 09:11:55.851456 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.851527 kubelet[2567]: W1213 09:11:55.851518 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.851858 kubelet[2567]: E1213 09:11:55.851569 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.851858 kubelet[2567]: E1213 09:11:55.851615 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.852055 kubelet[2567]: E1213 09:11:55.852043 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.852135 kubelet[2567]: W1213 09:11:55.852121 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.852204 kubelet[2567]: E1213 09:11:55.852190 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.866664 containerd[1470]: time="2024-12-13T09:11:55.863008276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:55.866664 containerd[1470]: time="2024-12-13T09:11:55.863107453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:55.866664 containerd[1470]: time="2024-12-13T09:11:55.863127278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:55.866664 containerd[1470]: time="2024-12-13T09:11:55.863272702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:55.897721 kubelet[2567]: E1213 09:11:55.897662 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.898979 kubelet[2567]: W1213 09:11:55.898694 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.898979 kubelet[2567]: E1213 09:11:55.898753 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.899920 kubelet[2567]: E1213 09:11:55.899893 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.901988 kubelet[2567]: W1213 09:11:55.901785 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.901988 kubelet[2567]: E1213 09:11:55.901864 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.906864 kubelet[2567]: E1213 09:11:55.906819 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.906864 kubelet[2567]: W1213 09:11:55.906860 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.909054 kubelet[2567]: E1213 09:11:55.907717 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.909054 kubelet[2567]: E1213 09:11:55.907810 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.909054 kubelet[2567]: W1213 09:11:55.908150 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.909054 kubelet[2567]: E1213 09:11:55.908419 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.911571 kubelet[2567]: E1213 09:11:55.910908 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.911571 kubelet[2567]: W1213 09:11:55.910935 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.911571 kubelet[2567]: E1213 09:11:55.911125 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.911815 kubelet[2567]: E1213 09:11:55.911289 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.911949 systemd[1]: Started cri-containerd-670df67fab25a88dcf70a424d0b60198238073df4b69af0e63c93dd46034074c.scope - libcontainer container 670df67fab25a88dcf70a424d0b60198238073df4b69af0e63c93dd46034074c. Dec 13 09:11:55.914514 kubelet[2567]: W1213 09:11:55.911629 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.914514 kubelet[2567]: E1213 09:11:55.913301 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.914851 kubelet[2567]: E1213 09:11:55.914587 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.915112 kubelet[2567]: W1213 09:11:55.914693 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.916180 kubelet[2567]: E1213 09:11:55.915157 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.919937 kubelet[2567]: E1213 09:11:55.917604 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.919937 kubelet[2567]: W1213 09:11:55.917630 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.919937 kubelet[2567]: E1213 09:11:55.917903 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.919937 kubelet[2567]: E1213 09:11:55.918536 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.919937 kubelet[2567]: W1213 09:11:55.918553 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.919937 kubelet[2567]: E1213 09:11:55.919698 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.922076 kubelet[2567]: E1213 09:11:55.920490 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.922076 kubelet[2567]: W1213 09:11:55.920513 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.922076 kubelet[2567]: E1213 09:11:55.920578 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.923036 kubelet[2567]: E1213 09:11:55.922414 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.923036 kubelet[2567]: W1213 09:11:55.922448 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.923036 kubelet[2567]: E1213 09:11:55.922539 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.923036 kubelet[2567]: E1213 09:11:55.923017 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.923036 kubelet[2567]: W1213 09:11:55.923044 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.924253 kubelet[2567]: E1213 09:11:55.923553 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.924253 kubelet[2567]: E1213 09:11:55.923902 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.924253 kubelet[2567]: W1213 09:11:55.923916 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.924253 kubelet[2567]: E1213 09:11:55.923967 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.925654 kubelet[2567]: E1213 09:11:55.924935 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.925654 kubelet[2567]: W1213 09:11:55.924952 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.925931 kubelet[2567]: E1213 09:11:55.925795 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.925931 kubelet[2567]: W1213 09:11:55.925816 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.925931 kubelet[2567]: E1213 09:11:55.925889 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.928727 kubelet[2567]: E1213 09:11:55.926679 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.928727 kubelet[2567]: E1213 09:11:55.926739 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.928727 kubelet[2567]: W1213 09:11:55.926757 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.928727 kubelet[2567]: E1213 09:11:55.927328 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.928727 kubelet[2567]: E1213 09:11:55.927973 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.928727 kubelet[2567]: W1213 09:11:55.927988 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.928727 kubelet[2567]: E1213 09:11:55.928032 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.933209 kubelet[2567]: E1213 09:11:55.929079 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.933209 kubelet[2567]: W1213 09:11:55.929097 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.933209 kubelet[2567]: E1213 09:11:55.929212 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.933209 kubelet[2567]: E1213 09:11:55.929832 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.933209 kubelet[2567]: W1213 09:11:55.929845 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.933209 kubelet[2567]: E1213 09:11:55.929923 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.933209 kubelet[2567]: E1213 09:11:55.931782 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.933209 kubelet[2567]: W1213 09:11:55.931803 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.933209 kubelet[2567]: E1213 09:11:55.931853 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.933209 kubelet[2567]: E1213 09:11:55.932382 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.933597 kubelet[2567]: W1213 09:11:55.932399 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.933597 kubelet[2567]: E1213 09:11:55.932720 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.933597 kubelet[2567]: E1213 09:11:55.932726 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.933597 kubelet[2567]: W1213 09:11:55.932833 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.933597 kubelet[2567]: E1213 09:11:55.932868 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:11:55.936322 kubelet[2567]: E1213 09:11:55.934528 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.936322 kubelet[2567]: W1213 09:11:55.934549 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.936322 kubelet[2567]: E1213 09:11:55.934592 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.936322 kubelet[2567]: E1213 09:11:55.935070 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.936322 kubelet[2567]: W1213 09:11:55.935085 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.936322 kubelet[2567]: E1213 09:11:55.935113 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.936322 kubelet[2567]: E1213 09:11:55.935773 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.936322 kubelet[2567]: W1213 09:11:55.935789 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.936322 kubelet[2567]: E1213 09:11:55.935808 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:55.946675 kubelet[2567]: E1213 09:11:55.946012 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:55.949536 containerd[1470]: time="2024-12-13T09:11:55.948971349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tl8fc,Uid:bc2a685c-c9eb-4841-b7aa-5d9c9250aada,Namespace:calico-system,Attempt:0,}" Dec 13 09:11:55.964699 kubelet[2567]: E1213 09:11:55.964618 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:11:55.964970 kubelet[2567]: W1213 09:11:55.964930 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:11:55.965263 kubelet[2567]: E1213 09:11:55.965243 2567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:11:56.027589 containerd[1470]: time="2024-12-13T09:11:56.025199542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:56.027589 containerd[1470]: time="2024-12-13T09:11:56.025291899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:56.027589 containerd[1470]: time="2024-12-13T09:11:56.025312026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:56.027589 containerd[1470]: time="2024-12-13T09:11:56.025461708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:56.075992 systemd[1]: Started cri-containerd-8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6.scope - libcontainer container 8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6. Dec 13 09:11:56.191788 containerd[1470]: time="2024-12-13T09:11:56.191176550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tl8fc,Uid:bc2a685c-c9eb-4841-b7aa-5d9c9250aada,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6\"" Dec 13 09:11:56.195404 kubelet[2567]: E1213 09:11:56.194127 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:56.199255 containerd[1470]: time="2024-12-13T09:11:56.198695900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 09:11:56.245218 containerd[1470]: time="2024-12-13T09:11:56.245087646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d5657df6-r8r2b,Uid:4d9c2b39-c09f-4d6c-af06-e051049cb769,Namespace:calico-system,Attempt:0,} returns sandbox id \"670df67fab25a88dcf70a424d0b60198238073df4b69af0e63c93dd46034074c\"" Dec 13 09:11:56.246665 kubelet[2567]: E1213 09:11:56.246407 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:57.593504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270800936.mount: Deactivated successfully. 
Dec 13 09:11:57.682950 kubelet[2567]: E1213 09:11:57.682171 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fg4tr" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:11:57.754541 containerd[1470]: time="2024-12-13T09:11:57.754471318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:57.756022 containerd[1470]: time="2024-12-13T09:11:57.755518982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 09:11:57.757997 containerd[1470]: time="2024-12-13T09:11:57.757921503Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:57.762705 containerd[1470]: time="2024-12-13T09:11:57.762620822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:57.764672 containerd[1470]: time="2024-12-13T09:11:57.764463928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.56571191s" Dec 13 09:11:57.764672 containerd[1470]: time="2024-12-13T09:11:57.764527197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 09:11:57.766979 containerd[1470]: time="2024-12-13T09:11:57.766936536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 09:11:57.771345 containerd[1470]: time="2024-12-13T09:11:57.771082692Z" level=info msg="CreateContainer within sandbox \"8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 09:11:57.798696 containerd[1470]: time="2024-12-13T09:11:57.798585874Z" level=info msg="CreateContainer within sandbox \"8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3\"" Dec 13 09:11:57.800807 containerd[1470]: time="2024-12-13T09:11:57.799447316Z" level=info msg="StartContainer for \"b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3\"" Dec 13 09:11:57.857033 systemd[1]: Started cri-containerd-b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3.scope - libcontainer container b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3. 
Dec 13 09:11:57.922586 containerd[1470]: time="2024-12-13T09:11:57.922085239Z" level=info msg="StartContainer for \"b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3\" returns successfully" Dec 13 09:11:57.946126 systemd[1]: cri-containerd-b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3.scope: Deactivated successfully. Dec 13 09:11:57.999170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3-rootfs.mount: Deactivated successfully. Dec 13 09:11:58.042231 containerd[1470]: time="2024-12-13T09:11:58.042126705Z" level=info msg="shim disconnected" id=b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3 namespace=k8s.io Dec 13 09:11:58.042231 containerd[1470]: time="2024-12-13T09:11:58.042202466Z" level=warning msg="cleaning up after shim disconnected" id=b91917b81838485b4109dc73c7a61f0a512c52c8d98d9670c3777ec1e1c35be3 namespace=k8s.io Dec 13 09:11:58.042231 containerd[1470]: time="2024-12-13T09:11:58.042215721Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:11:58.844665 kubelet[2567]: E1213 09:11:58.844583 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:59.682595 kubelet[2567]: E1213 09:11:59.682482 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fg4tr" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:12:00.914757 containerd[1470]: time="2024-12-13T09:12:00.914191884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:00.917814 containerd[1470]: time="2024-12-13T09:12:00.917530197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 09:12:00.920467 containerd[1470]: time="2024-12-13T09:12:00.920391718Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:00.928903 containerd[1470]: time="2024-12-13T09:12:00.928813012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:00.930379 containerd[1470]: time="2024-12-13T09:12:00.929991411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.162818893s" Dec 13 09:12:00.930379 containerd[1470]: time="2024-12-13T09:12:00.930071263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 09:12:00.933229 containerd[1470]: time="2024-12-13T09:12:00.931989770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 09:12:00.976050 containerd[1470]: 
time="2024-12-13T09:12:00.975998064Z" level=info msg="CreateContainer within sandbox \"670df67fab25a88dcf70a424d0b60198238073df4b69af0e63c93dd46034074c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 09:12:01.014699 containerd[1470]: time="2024-12-13T09:12:01.013465015Z" level=info msg="CreateContainer within sandbox \"670df67fab25a88dcf70a424d0b60198238073df4b69af0e63c93dd46034074c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"07940c99356e2e54e1bdf6c853650cd722d5b7091db99ca9521e08179f94a5b8\"" Dec 13 09:12:01.015416 containerd[1470]: time="2024-12-13T09:12:01.015330935Z" level=info msg="StartContainer for \"07940c99356e2e54e1bdf6c853650cd722d5b7091db99ca9521e08179f94a5b8\"" Dec 13 09:12:01.109986 systemd[1]: Started cri-containerd-07940c99356e2e54e1bdf6c853650cd722d5b7091db99ca9521e08179f94a5b8.scope - libcontainer container 07940c99356e2e54e1bdf6c853650cd722d5b7091db99ca9521e08179f94a5b8. Dec 13 09:12:01.251448 containerd[1470]: time="2024-12-13T09:12:01.250548922Z" level=info msg="StartContainer for \"07940c99356e2e54e1bdf6c853650cd722d5b7091db99ca9521e08179f94a5b8\" returns successfully" Dec 13 09:12:01.684966 kubelet[2567]: E1213 09:12:01.683350 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fg4tr" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:12:01.868728 kubelet[2567]: E1213 09:12:01.867837 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:01.910915 kubelet[2567]: I1213 09:12:01.910754 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8d5657df6-r8r2b" podStartSLOduration=2.228339771 podStartE2EDuration="6.909853439s" podCreationTimestamp="2024-12-13 09:11:55 +0000 UTC" firstStartedPulling="2024-12-13 09:11:56.250153176 +0000 UTC m=+26.736983416" lastFinishedPulling="2024-12-13 09:12:00.931666827 +0000 UTC m=+31.418497084" observedRunningTime="2024-12-13 09:12:01.900243467 +0000 UTC m=+32.387073734" watchObservedRunningTime="2024-12-13 09:12:01.909853439 +0000 UTC m=+32.396683707" Dec 13 09:12:02.869483 kubelet[2567]: I1213 09:12:02.869439 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:12:02.871682 kubelet[2567]: E1213 09:12:02.871142 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:03.686963 kubelet[2567]: E1213 09:12:03.684997 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fg4tr" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:12:05.684003 kubelet[2567]: E1213 09:12:05.683872 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fg4tr" 
podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:12:07.457186 containerd[1470]: time="2024-12-13T09:12:07.456776883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:07.459192 containerd[1470]: time="2024-12-13T09:12:07.458851944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 09:12:07.460680 containerd[1470]: time="2024-12-13T09:12:07.460064973Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:07.464177 containerd[1470]: time="2024-12-13T09:12:07.464068202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:07.465452 containerd[1470]: time="2024-12-13T09:12:07.464805938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.532766025s" Dec 13 09:12:07.465452 containerd[1470]: time="2024-12-13T09:12:07.464852250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 09:12:07.468210 containerd[1470]: time="2024-12-13T09:12:07.468148766Z" level=info msg="CreateContainer within sandbox \"8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 09:12:07.493436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33542017.mount: Deactivated successfully. Dec 13 09:12:07.501228 containerd[1470]: time="2024-12-13T09:12:07.501035510Z" level=info msg="CreateContainer within sandbox \"8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe\"" Dec 13 09:12:07.502669 containerd[1470]: time="2024-12-13T09:12:07.502184716Z" level=info msg="StartContainer for \"83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe\"" Dec 13 09:12:07.647901 systemd[1]: run-containerd-runc-k8s.io-83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe-runc.qtdOhQ.mount: Deactivated successfully. Dec 13 09:12:07.660090 systemd[1]: Started cri-containerd-83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe.scope - libcontainer container 83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe. 
Dec 13 09:12:07.682819 kubelet[2567]: E1213 09:12:07.682730 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fg4tr" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:12:07.779510 containerd[1470]: time="2024-12-13T09:12:07.779179278Z" level=info msg="StartContainer for \"83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe\" returns successfully" Dec 13 09:12:07.923681 kubelet[2567]: E1213 09:12:07.923275 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:08.662619 systemd[1]: cri-containerd-83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe.scope: Deactivated successfully. Dec 13 09:12:08.713098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe-rootfs.mount: Deactivated successfully. Dec 13 09:12:08.724549 containerd[1470]: time="2024-12-13T09:12:08.723117729Z" level=info msg="shim disconnected" id=83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe namespace=k8s.io Dec 13 09:12:08.724549 containerd[1470]: time="2024-12-13T09:12:08.723194666Z" level=warning msg="cleaning up after shim disconnected" id=83514939f7f0cc09397b26e893bf78d20ab274e3c0c80bd9536131e214d096fe namespace=k8s.io Dec 13 09:12:08.724549 containerd[1470]: time="2024-12-13T09:12:08.723206054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:12:08.738289 kubelet[2567]: I1213 09:12:08.738002 2567 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 09:12:08.751849 containerd[1470]: time="2024-12-13T09:12:08.751589919Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:12:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:12:08.775227 kubelet[2567]: I1213 09:12:08.774258 2567 topology_manager.go:215] "Topology Admit Handler" podUID="eb0da44c-3f76-4633-8e65-0b9e15072d96" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rbtvp" Dec 13 09:12:08.782631 kubelet[2567]: I1213 09:12:08.782560 2567 topology_manager.go:215] "Topology Admit Handler" podUID="6c0ab554-eaa5-49a2-ba96-7901a803a4df" podNamespace="calico-system" podName="calico-kube-controllers-847f66d7bd-gnjzf" Dec 13 09:12:08.793216 kubelet[2567]: I1213 09:12:08.792481 2567 topology_manager.go:215] "Topology Admit Handler" podUID="2399d564-1e25-4cf5-a873-070c1c53ce9a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hcqvh" Dec 13 09:12:08.792999 systemd[1]: Created slice kubepods-burstable-podeb0da44c_3f76_4633_8e65_0b9e15072d96.slice - libcontainer container kubepods-burstable-podeb0da44c_3f76_4633_8e65_0b9e15072d96.slice. 
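The recurring dns.go:153 "Nameserver limits exceeded" messages indicate the node's resolv.conf carries more nameserver entries than the kubelet will propagate into pods; Kubernetes caps the list at three, so the extras are dropped and the applied line ends up as the three entries shown (including the duplicated 67.207.67.2). A rough sketch of that truncation follows, using a simplified resolv.conf parser and a hypothetical fourth entry rather than the kubelet's actual dns package or this node's real file.

```go
// resolv_cap.go - sketch of capping resolv.conf nameservers the way the warning describes.
// Assumption: maxNameservers = 3 mirrors the Kubernetes DNS limit; parsing is simplified.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3

func cappedNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extra nameservers are omitted, as the warning says
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf: the first three entries (with the duplicate) are
	// what the kubelet reports as the "applied nameserver line"; the fourth is invented.
	conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 8.8.8.8\n"
	fmt.Println(strings.Join(cappedNameservers(conf), " "))
	// Output: 67.207.67.2 67.207.67.3 67.207.67.2
}
```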
Dec 13 09:12:08.796661 kubelet[2567]: I1213 09:12:08.795822 2567 topology_manager.go:215] "Topology Admit Handler" podUID="4700244a-abfc-4fc6-93e2-57b920e50bc1" podNamespace="calico-apiserver" podName="calico-apiserver-dd59c5464-h679k" Dec 13 09:12:08.796661 kubelet[2567]: I1213 09:12:08.796026 2567 topology_manager.go:215] "Topology Admit Handler" podUID="57fcc6dd-566b-4786-b6c6-5e0d6f04624c" podNamespace="calico-apiserver" podName="calico-apiserver-dd59c5464-8chwx" Dec 13 09:12:08.799672 kubelet[2567]: W1213 09:12:08.799406 2567 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.2.1-e-b721934136" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-e-b721934136' and this object Dec 13 09:12:08.799672 kubelet[2567]: E1213 09:12:08.799475 2567 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.2.1-e-b721934136" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-e-b721934136' and this object Dec 13 09:12:08.810822 systemd[1]: Created slice kubepods-besteffort-pod6c0ab554_eaa5_49a2_ba96_7901a803a4df.slice - libcontainer container kubepods-besteffort-pod6c0ab554_eaa5_49a2_ba96_7901a803a4df.slice. Dec 13 09:12:08.821980 systemd[1]: Created slice kubepods-burstable-pod2399d564_1e25_4cf5_a873_070c1c53ce9a.slice - libcontainer container kubepods-burstable-pod2399d564_1e25_4cf5_a873_070c1c53ce9a.slice. Dec 13 09:12:08.842272 systemd[1]: Created slice kubepods-besteffort-pod57fcc6dd_566b_4786_b6c6_5e0d6f04624c.slice - libcontainer container kubepods-besteffort-pod57fcc6dd_566b_4786_b6c6_5e0d6f04624c.slice. Dec 13 09:12:08.854222 systemd[1]: Created slice kubepods-besteffort-pod4700244a_abfc_4fc6_93e2_57b920e50bc1.slice - libcontainer container kubepods-besteffort-pod4700244a_abfc_4fc6_93e2_57b920e50bc1.slice. 
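The "Created slice" lines map each admitted pod onto a systemd slice whose name embeds the pod UID with dashes replaced by underscores, prefixed by the QoS class (for example, pod eb0da44c-3f76-4633-8e65-0b9e15072d96 becomes kubepods-burstable-podeb0da44c_3f76_4633_8e65_0b9e15072d96.slice). The small sketch below derives that leaf slice name from the values visible in this log; it covers only the burstable/besteffort cases seen here and is not the kubelet's cgroup-manager code.

```go
// pod_slice.go - sketch of the pod-UID -> systemd slice naming visible in the "Created slice" lines.
// Assumption: only the burstable/besteffort QoS classes from this log are covered.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the leaf slice name systemd logs when the kubelet
// creates a pod cgroup: kubepods-<qosClass>-pod<uid-with-underscores>.slice.
func podSliceName(qosClass, podUID string) string {
	escapedUID := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escapedUID)
}

func main() {
	fmt.Println(podSliceName("burstable", "eb0da44c-3f76-4633-8e65-0b9e15072d96"))
	// kubepods-burstable-podeb0da44c_3f76_4633_8e65_0b9e15072d96.slice
	fmt.Println(podSliceName("besteffort", "7e99ecb7-45e0-439d-b399-3755faed5090"))
	// kubepods-besteffort-pod7e99ecb7_45e0_439d_b399_3755faed5090.slice
}
```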
Dec 13 09:12:08.932031 kubelet[2567]: E1213 09:12:08.929670 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:08.933576 containerd[1470]: time="2024-12-13T09:12:08.933530347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 09:12:08.944087 kubelet[2567]: I1213 09:12:08.942345 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2399d564-1e25-4cf5-a873-070c1c53ce9a-config-volume\") pod \"coredns-7db6d8ff4d-hcqvh\" (UID: \"2399d564-1e25-4cf5-a873-070c1c53ce9a\") " pod="kube-system/coredns-7db6d8ff4d-hcqvh" Dec 13 09:12:08.944087 kubelet[2567]: I1213 09:12:08.942407 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbs6p\" (UniqueName: \"kubernetes.io/projected/4700244a-abfc-4fc6-93e2-57b920e50bc1-kube-api-access-pbs6p\") pod \"calico-apiserver-dd59c5464-h679k\" (UID: \"4700244a-abfc-4fc6-93e2-57b920e50bc1\") " pod="calico-apiserver/calico-apiserver-dd59c5464-h679k" Dec 13 09:12:08.944087 kubelet[2567]: I1213 09:12:08.942428 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c0ab554-eaa5-49a2-ba96-7901a803a4df-tigera-ca-bundle\") pod \"calico-kube-controllers-847f66d7bd-gnjzf\" (UID: \"6c0ab554-eaa5-49a2-ba96-7901a803a4df\") " pod="calico-system/calico-kube-controllers-847f66d7bd-gnjzf" Dec 13 09:12:08.944087 kubelet[2567]: I1213 09:12:08.942464 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wbr\" (UniqueName: \"kubernetes.io/projected/2399d564-1e25-4cf5-a873-070c1c53ce9a-kube-api-access-t7wbr\") pod \"coredns-7db6d8ff4d-hcqvh\" (UID: \"2399d564-1e25-4cf5-a873-070c1c53ce9a\") " pod="kube-system/coredns-7db6d8ff4d-hcqvh" Dec 13 09:12:08.944087 kubelet[2567]: I1213 09:12:08.942484 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kxc\" (UniqueName: \"kubernetes.io/projected/6c0ab554-eaa5-49a2-ba96-7901a803a4df-kube-api-access-z2kxc\") pod \"calico-kube-controllers-847f66d7bd-gnjzf\" (UID: \"6c0ab554-eaa5-49a2-ba96-7901a803a4df\") " pod="calico-system/calico-kube-controllers-847f66d7bd-gnjzf" Dec 13 09:12:08.944345 kubelet[2567]: I1213 09:12:08.942540 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57fcc6dd-566b-4786-b6c6-5e0d6f04624c-calico-apiserver-certs\") pod \"calico-apiserver-dd59c5464-8chwx\" (UID: \"57fcc6dd-566b-4786-b6c6-5e0d6f04624c\") " pod="calico-apiserver/calico-apiserver-dd59c5464-8chwx" Dec 13 09:12:08.944345 kubelet[2567]: I1213 09:12:08.942569 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frfhf\" (UniqueName: \"kubernetes.io/projected/57fcc6dd-566b-4786-b6c6-5e0d6f04624c-kube-api-access-frfhf\") pod \"calico-apiserver-dd59c5464-8chwx\" (UID: \"57fcc6dd-566b-4786-b6c6-5e0d6f04624c\") " pod="calico-apiserver/calico-apiserver-dd59c5464-8chwx" Dec 13 09:12:08.944345 kubelet[2567]: I1213 09:12:08.942595 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb0da44c-3f76-4633-8e65-0b9e15072d96-config-volume\") pod \"coredns-7db6d8ff4d-rbtvp\" (UID: \"eb0da44c-3f76-4633-8e65-0b9e15072d96\") " pod="kube-system/coredns-7db6d8ff4d-rbtvp" Dec 13 09:12:08.944345 kubelet[2567]: I1213 09:12:08.942622 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4700244a-abfc-4fc6-93e2-57b920e50bc1-calico-apiserver-certs\") pod \"calico-apiserver-dd59c5464-h679k\" (UID: \"4700244a-abfc-4fc6-93e2-57b920e50bc1\") " pod="calico-apiserver/calico-apiserver-dd59c5464-h679k" Dec 13 09:12:08.944345 kubelet[2567]: I1213 09:12:08.942665 2567 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wq6\" (UniqueName: \"kubernetes.io/projected/eb0da44c-3f76-4633-8e65-0b9e15072d96-kube-api-access-m9wq6\") pod \"coredns-7db6d8ff4d-rbtvp\" (UID: \"eb0da44c-3f76-4633-8e65-0b9e15072d96\") " pod="kube-system/coredns-7db6d8ff4d-rbtvp" Dec 13 09:12:09.116946 containerd[1470]: time="2024-12-13T09:12:09.116868643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f66d7bd-gnjzf,Uid:6c0ab554-eaa5-49a2-ba96-7901a803a4df,Namespace:calico-system,Attempt:0,}" Dec 13 09:12:09.150427 containerd[1470]: time="2024-12-13T09:12:09.150313193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd59c5464-8chwx,Uid:57fcc6dd-566b-4786-b6c6-5e0d6f04624c,Namespace:calico-apiserver,Attempt:0,}" Dec 13 09:12:09.162619 containerd[1470]: time="2024-12-13T09:12:09.160910786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd59c5464-h679k,Uid:4700244a-abfc-4fc6-93e2-57b920e50bc1,Namespace:calico-apiserver,Attempt:0,}" Dec 13 09:12:09.460146 containerd[1470]: time="2024-12-13T09:12:09.459663925Z" level=error msg="Failed to destroy network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.472047 containerd[1470]: time="2024-12-13T09:12:09.471587559Z" level=error msg="encountered an error cleaning up failed sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.472047 containerd[1470]: time="2024-12-13T09:12:09.471735616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd59c5464-8chwx,Uid:57fcc6dd-566b-4786-b6c6-5e0d6f04624c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.482432 containerd[1470]: time="2024-12-13T09:12:09.480790386Z" level=error msg="Failed to destroy network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.482432 containerd[1470]: time="2024-12-13T09:12:09.481243346Z" level=error msg="encountered an error cleaning up failed sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.482432 containerd[1470]: time="2024-12-13T09:12:09.481365903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd59c5464-h679k,Uid:4700244a-abfc-4fc6-93e2-57b920e50bc1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.482432 containerd[1470]: time="2024-12-13T09:12:09.481462876Z" level=error msg="Failed to destroy network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.482432 containerd[1470]: time="2024-12-13T09:12:09.481953600Z" level=error msg="encountered an error cleaning up failed sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.482432 containerd[1470]: time="2024-12-13T09:12:09.482128755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f66d7bd-gnjzf,Uid:6c0ab554-eaa5-49a2-ba96-7901a803a4df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.483174 kubelet[2567]: E1213 09:12:09.482958 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.483174 kubelet[2567]: E1213 09:12:09.483016 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.483174 kubelet[2567]: E1213 09:12:09.483064 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd59c5464-h679k" Dec 13 09:12:09.483174 kubelet[2567]: E1213 09:12:09.483101 2567 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd59c5464-h679k" Dec 13 09:12:09.483407 kubelet[2567]: E1213 09:12:09.483184 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd59c5464-h679k_calico-apiserver(4700244a-abfc-4fc6-93e2-57b920e50bc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd59c5464-h679k_calico-apiserver(4700244a-abfc-4fc6-93e2-57b920e50bc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd59c5464-h679k" podUID="4700244a-abfc-4fc6-93e2-57b920e50bc1" Dec 13 09:12:09.484822 kubelet[2567]: E1213 09:12:09.482950 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.484822 kubelet[2567]: E1213 09:12:09.483699 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd59c5464-8chwx" Dec 13 09:12:09.484822 kubelet[2567]: E1213 09:12:09.483750 2567 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd59c5464-8chwx" Dec 13 09:12:09.485155 kubelet[2567]: E1213 09:12:09.483807 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd59c5464-8chwx_calico-apiserver(57fcc6dd-566b-4786-b6c6-5e0d6f04624c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd59c5464-8chwx_calico-apiserver(57fcc6dd-566b-4786-b6c6-5e0d6f04624c)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd59c5464-8chwx" podUID="57fcc6dd-566b-4786-b6c6-5e0d6f04624c" Dec 13 09:12:09.485552 kubelet[2567]: E1213 09:12:09.485353 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847f66d7bd-gnjzf" Dec 13 09:12:09.485552 kubelet[2567]: E1213 09:12:09.485406 2567 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847f66d7bd-gnjzf" Dec 13 09:12:09.485552 kubelet[2567]: E1213 09:12:09.485478 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-847f66d7bd-gnjzf_calico-system(6c0ab554-eaa5-49a2-ba96-7901a803a4df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-847f66d7bd-gnjzf_calico-system(6c0ab554-eaa5-49a2-ba96-7901a803a4df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-847f66d7bd-gnjzf" podUID="6c0ab554-eaa5-49a2-ba96-7901a803a4df" Dec 13 09:12:09.692747 systemd[1]: Created slice kubepods-besteffort-pod7e99ecb7_45e0_439d_b399_3755faed5090.slice - libcontainer container kubepods-besteffort-pod7e99ecb7_45e0_439d_b399_3755faed5090.slice. 
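Every RunPodSandbox failure above bottoms out in the same CNI error: the Calico plugin stats /var/lib/calico/nodename, which exists only after the calico/node container (still blocked here behind its flexvol/install-cni init steps) has written it into the mounted /var/lib/calico/ directory. The sketch below is a minimal readiness check mirroring the wording of that error; Calico's real plugin does considerably more than stat one file.

```go
// nodename_check.go - sketch of the readiness check implied by the sandbox errors above.
// Assumption: illustrative only; not Calico's actual CNI plugin logic.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	// This stat is the condition the sandbox errors report: the file is absent
	// until calico/node is running and has mounted /var/lib/calico/.
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Fprintf(os.Stderr,
			"%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("calico node name: %s\n", data)
}
```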
Dec 13 09:12:09.696040 containerd[1470]: time="2024-12-13T09:12:09.695983700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fg4tr,Uid:7e99ecb7-45e0-439d-b399-3755faed5090,Namespace:calico-system,Attempt:0,}" Dec 13 09:12:09.812578 containerd[1470]: time="2024-12-13T09:12:09.812414074Z" level=error msg="Failed to destroy network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.815671 containerd[1470]: time="2024-12-13T09:12:09.814052085Z" level=error msg="encountered an error cleaning up failed sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.815671 containerd[1470]: time="2024-12-13T09:12:09.814145553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fg4tr,Uid:7e99ecb7-45e0-439d-b399-3755faed5090,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.816084 kubelet[2567]: E1213 09:12:09.816024 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:09.816467 kubelet[2567]: E1213 09:12:09.816115 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fg4tr" Dec 13 09:12:09.816467 kubelet[2567]: E1213 09:12:09.816150 2567 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fg4tr" Dec 13 09:12:09.816467 kubelet[2567]: E1213 09:12:09.816216 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fg4tr_calico-system(7e99ecb7-45e0-439d-b399-3755faed5090)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fg4tr_calico-system(7e99ecb7-45e0-439d-b399-3755faed5090)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fg4tr" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:12:09.821275 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f-shm.mount: Deactivated successfully. Dec 13 09:12:09.931236 kubelet[2567]: I1213 09:12:09.931205 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:09.934970 kubelet[2567]: I1213 09:12:09.934931 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:09.942042 containerd[1470]: time="2024-12-13T09:12:09.940176542Z" level=info msg="StopPodSandbox for \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\"" Dec 13 09:12:09.942042 containerd[1470]: time="2024-12-13T09:12:09.941065676Z" level=info msg="StopPodSandbox for \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\"" Dec 13 09:12:09.942272 containerd[1470]: time="2024-12-13T09:12:09.942105998Z" level=info msg="Ensure that sandbox 03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e in task-service has been cleanup successfully" Dec 13 09:12:09.942535 containerd[1470]: time="2024-12-13T09:12:09.942498653Z" level=info msg="Ensure that sandbox fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19 in task-service has been cleanup successfully" Dec 13 09:12:09.951784 kubelet[2567]: I1213 09:12:09.951743 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:09.953966 containerd[1470]: time="2024-12-13T09:12:09.953435609Z" level=info msg="StopPodSandbox for \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\"" Dec 13 09:12:09.954111 containerd[1470]: time="2024-12-13T09:12:09.954068456Z" level=info msg="Ensure that sandbox 9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593 in task-service has been cleanup successfully" Dec 13 09:12:09.958389 kubelet[2567]: I1213 09:12:09.958356 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:09.959703 containerd[1470]: time="2024-12-13T09:12:09.959662067Z" level=info msg="StopPodSandbox for \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\"" Dec 13 09:12:09.962676 containerd[1470]: time="2024-12-13T09:12:09.962496432Z" level=info msg="Ensure that sandbox e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f in task-service has been cleanup successfully" Dec 13 09:12:10.003340 kubelet[2567]: E1213 09:12:10.002007 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:10.003516 containerd[1470]: time="2024-12-13T09:12:10.002759738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbtvp,Uid:eb0da44c-3f76-4633-8e65-0b9e15072d96,Namespace:kube-system,Attempt:0,}" Dec 13 09:12:10.029275 
kubelet[2567]: E1213 09:12:10.029224 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:10.040307 containerd[1470]: time="2024-12-13T09:12:10.040244516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcqvh,Uid:2399d564-1e25-4cf5-a873-070c1c53ce9a,Namespace:kube-system,Attempt:0,}" Dec 13 09:12:10.087907 containerd[1470]: time="2024-12-13T09:12:10.087841199Z" level=error msg="StopPodSandbox for \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\" failed" error="failed to destroy network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.089736 containerd[1470]: time="2024-12-13T09:12:10.088447123Z" level=error msg="StopPodSandbox for \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\" failed" error="failed to destroy network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.089891 kubelet[2567]: E1213 09:12:10.088852 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:10.089891 kubelet[2567]: E1213 09:12:10.088935 2567 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e"} Dec 13 09:12:10.089891 kubelet[2567]: E1213 09:12:10.089013 2567 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4700244a-abfc-4fc6-93e2-57b920e50bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:12:10.089891 kubelet[2567]: E1213 09:12:10.089038 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4700244a-abfc-4fc6-93e2-57b920e50bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd59c5464-h679k" podUID="4700244a-abfc-4fc6-93e2-57b920e50bc1" Dec 13 09:12:10.090136 kubelet[2567]: E1213 09:12:10.088861 2567 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:10.092368 kubelet[2567]: E1213 09:12:10.089192 2567 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f"} Dec 13 09:12:10.092368 kubelet[2567]: E1213 09:12:10.090878 2567 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e99ecb7-45e0-439d-b399-3755faed5090\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:12:10.092368 kubelet[2567]: E1213 09:12:10.090983 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e99ecb7-45e0-439d-b399-3755faed5090\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fg4tr" podUID="7e99ecb7-45e0-439d-b399-3755faed5090" Dec 13 09:12:10.107124 containerd[1470]: time="2024-12-13T09:12:10.106846092Z" level=error msg="StopPodSandbox for \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\" failed" error="failed to destroy network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.108866 kubelet[2567]: E1213 09:12:10.108777 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:10.108866 kubelet[2567]: E1213 09:12:10.108876 2567 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593"} Dec 13 09:12:10.109070 kubelet[2567]: E1213 09:12:10.108916 2567 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c0ab554-eaa5-49a2-ba96-7901a803a4df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:12:10.109070 kubelet[2567]: E1213 09:12:10.108950 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c0ab554-eaa5-49a2-ba96-7901a803a4df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-847f66d7bd-gnjzf" podUID="6c0ab554-eaa5-49a2-ba96-7901a803a4df" Dec 13 09:12:10.109223 containerd[1470]: time="2024-12-13T09:12:10.109178316Z" level=error msg="StopPodSandbox for \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\" failed" error="failed to destroy network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.109547 kubelet[2567]: E1213 09:12:10.109412 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:10.109547 kubelet[2567]: E1213 09:12:10.109461 2567 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19"} Dec 13 09:12:10.109547 kubelet[2567]: E1213 09:12:10.109495 2567 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57fcc6dd-566b-4786-b6c6-5e0d6f04624c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:12:10.109547 kubelet[2567]: E1213 09:12:10.109518 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57fcc6dd-566b-4786-b6c6-5e0d6f04624c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd59c5464-8chwx" podUID="57fcc6dd-566b-4786-b6c6-5e0d6f04624c" Dec 13 09:12:10.283926 containerd[1470]: time="2024-12-13T09:12:10.283722123Z" level=error msg="Failed to destroy network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.284469 containerd[1470]: time="2024-12-13T09:12:10.284422761Z" level=error msg="encountered an error cleaning up failed sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.285153 containerd[1470]: time="2024-12-13T09:12:10.284718629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcqvh,Uid:2399d564-1e25-4cf5-a873-070c1c53ce9a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.288544 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210-shm.mount: Deactivated successfully. Dec 13 09:12:10.290968 containerd[1470]: time="2024-12-13T09:12:10.290823679Z" level=error msg="Failed to destroy network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.292008 kubelet[2567]: E1213 09:12:10.291594 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.292008 kubelet[2567]: E1213 09:12:10.291703 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hcqvh" Dec 13 09:12:10.292377 kubelet[2567]: E1213 09:12:10.292251 2567 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hcqvh" Dec 13 09:12:10.292923 kubelet[2567]: E1213 09:12:10.292588 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hcqvh_kube-system(2399d564-1e25-4cf5-a873-070c1c53ce9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hcqvh_kube-system(2399d564-1e25-4cf5-a873-070c1c53ce9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hcqvh" podUID="2399d564-1e25-4cf5-a873-070c1c53ce9a" Dec 13 09:12:10.296953 containerd[1470]: time="2024-12-13T09:12:10.296608719Z" level=error msg="encountered an error cleaning up failed sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.296953 containerd[1470]: time="2024-12-13T09:12:10.296814202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbtvp,Uid:eb0da44c-3f76-4633-8e65-0b9e15072d96,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.297215 kubelet[2567]: E1213 09:12:10.297159 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:10.297289 kubelet[2567]: E1213 09:12:10.297240 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rbtvp" Dec 13 09:12:10.297336 kubelet[2567]: E1213 09:12:10.297270 2567 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rbtvp" Dec 13 09:12:10.297440 kubelet[2567]: E1213 09:12:10.297374 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rbtvp_kube-system(eb0da44c-3f76-4633-8e65-0b9e15072d96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rbtvp_kube-system(eb0da44c-3f76-4633-8e65-0b9e15072d96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rbtvp" 
podUID="eb0da44c-3f76-4633-8e65-0b9e15072d96" Dec 13 09:12:10.714371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b-shm.mount: Deactivated successfully. Dec 13 09:12:10.963693 kubelet[2567]: I1213 09:12:10.962878 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:10.969932 containerd[1470]: time="2024-12-13T09:12:10.967993413Z" level=info msg="StopPodSandbox for \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\"" Dec 13 09:12:10.969932 containerd[1470]: time="2024-12-13T09:12:10.968241084Z" level=info msg="Ensure that sandbox 5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210 in task-service has been cleanup successfully" Dec 13 09:12:10.986133 kubelet[2567]: I1213 09:12:10.985669 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:10.989872 containerd[1470]: time="2024-12-13T09:12:10.989815084Z" level=info msg="StopPodSandbox for \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\"" Dec 13 09:12:10.996271 containerd[1470]: time="2024-12-13T09:12:10.995500231Z" level=info msg="Ensure that sandbox 9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b in task-service has been cleanup successfully" Dec 13 09:12:11.080941 containerd[1470]: time="2024-12-13T09:12:11.080858652Z" level=error msg="StopPodSandbox for \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\" failed" error="failed to destroy network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:11.081728 kubelet[2567]: E1213 09:12:11.081405 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:11.081728 kubelet[2567]: E1213 09:12:11.081474 2567 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b"} Dec 13 09:12:11.081728 kubelet[2567]: E1213 09:12:11.081538 2567 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb0da44c-3f76-4633-8e65-0b9e15072d96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:12:11.081728 kubelet[2567]: E1213 09:12:11.081582 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb0da44c-3f76-4633-8e65-0b9e15072d96\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rbtvp" podUID="eb0da44c-3f76-4633-8e65-0b9e15072d96" Dec 13 09:12:11.086227 containerd[1470]: time="2024-12-13T09:12:11.086136033Z" level=error msg="StopPodSandbox for \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\" failed" error="failed to destroy network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:12:11.087019 kubelet[2567]: E1213 09:12:11.086784 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:11.087019 kubelet[2567]: E1213 09:12:11.086867 2567 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210"} Dec 13 09:12:11.087019 kubelet[2567]: E1213 09:12:11.086924 2567 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2399d564-1e25-4cf5-a873-070c1c53ce9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:12:11.087019 kubelet[2567]: E1213 09:12:11.086963 2567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2399d564-1e25-4cf5-a873-070c1c53ce9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hcqvh" podUID="2399d564-1e25-4cf5-a873-070c1c53ce9a" Dec 13 09:12:15.090674 kubelet[2567]: I1213 09:12:15.090240 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:12:15.117810 kubelet[2567]: E1213 09:12:15.117498 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:16.033179 kubelet[2567]: E1213 09:12:16.032538 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:16.400032 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount741235822.mount: Deactivated successfully. Dec 13 09:12:16.542551 containerd[1470]: time="2024-12-13T09:12:16.501965134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 09:12:16.600718 containerd[1470]: time="2024-12-13T09:12:16.600619027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:16.613497 containerd[1470]: time="2024-12-13T09:12:16.613429060Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:16.629592 containerd[1470]: time="2024-12-13T09:12:16.629498081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:16.636556 containerd[1470]: time="2024-12-13T09:12:16.636440241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.697166648s" Dec 13 09:12:16.636556 containerd[1470]: time="2024-12-13T09:12:16.636540333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 09:12:16.724757 containerd[1470]: time="2024-12-13T09:12:16.723886457Z" level=info msg="CreateContainer within sandbox \"8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 09:12:16.832837 containerd[1470]: time="2024-12-13T09:12:16.832403221Z" level=info msg="CreateContainer within sandbox \"8f4cbdb074c0799e730b49d09a7eb8ac2649a90fe6e4dbe6dbbc87dbd84b46d6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080\"" Dec 13 09:12:16.836942 containerd[1470]: time="2024-12-13T09:12:16.836655441Z" level=info msg="StartContainer for \"0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080\"" Dec 13 09:12:16.967979 systemd[1]: Started cri-containerd-0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080.scope - libcontainer container 0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080. Dec 13 09:12:17.054828 containerd[1470]: time="2024-12-13T09:12:17.052807063Z" level=info msg="StartContainer for \"0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080\" returns successfully" Dec 13 09:12:17.243069 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 09:12:17.244840 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 09:12:18.104688 kubelet[2567]: E1213 09:12:18.103929 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:18.172672 kubelet[2567]: I1213 09:12:18.170917 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tl8fc" podStartSLOduration=2.681179493 podStartE2EDuration="23.160100465s" podCreationTimestamp="2024-12-13 09:11:55 +0000 UTC" firstStartedPulling="2024-12-13 09:11:56.196833045 +0000 UTC m=+26.683663283" lastFinishedPulling="2024-12-13 09:12:16.675754005 +0000 UTC m=+47.162584255" observedRunningTime="2024-12-13 09:12:18.138615001 +0000 UTC m=+48.625445262" watchObservedRunningTime="2024-12-13 09:12:18.160100465 +0000 UTC m=+48.646930758" Dec 13 09:12:19.107543 kubelet[2567]: E1213 09:12:19.107424 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:19.253686 kernel: bpftool[3815]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 09:12:19.617123 systemd-networkd[1367]: vxlan.calico: Link UP Dec 13 09:12:19.617133 systemd-networkd[1367]: vxlan.calico: Gained carrier Dec 13 09:12:20.109995 kubelet[2567]: E1213 09:12:20.109458 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:20.132543 systemd[1]: run-containerd-runc-k8s.io-0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080-runc.Nd26mY.mount: Deactivated successfully. Dec 13 09:12:21.580303 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Dec 13 09:12:21.686038 containerd[1470]: time="2024-12-13T09:12:21.684884096Z" level=info msg="StopPodSandbox for \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\"" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.776 [INFO][3925] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.777 [INFO][3925] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" iface="eth0" netns="/var/run/netns/cni-94e88942-c122-0aa5-de0d-e7cdb6e3d196" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.778 [INFO][3925] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" iface="eth0" netns="/var/run/netns/cni-94e88942-c122-0aa5-de0d-e7cdb6e3d196" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.783 [INFO][3925] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" iface="eth0" netns="/var/run/netns/cni-94e88942-c122-0aa5-de0d-e7cdb6e3d196" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.783 [INFO][3925] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.783 [INFO][3925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.929 [INFO][3931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.932 [INFO][3931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.932 [INFO][3931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.947 [WARNING][3931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.947 [INFO][3931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.951 [INFO][3931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:21.958544 containerd[1470]: 2024-12-13 09:12:21.954 [INFO][3925] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:21.972191 systemd[1]: run-netns-cni\x2d94e88942\x2dc122\x2d0aa5\x2dde0d\x2de7cdb6e3d196.mount: Deactivated successfully. 
Dec 13 09:12:21.980368 containerd[1470]: time="2024-12-13T09:12:21.980280462Z" level=info msg="TearDown network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\" successfully" Dec 13 09:12:21.980368 containerd[1470]: time="2024-12-13T09:12:21.980346221Z" level=info msg="StopPodSandbox for \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\" returns successfully" Dec 13 09:12:21.981552 containerd[1470]: time="2024-12-13T09:12:21.981477115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fg4tr,Uid:7e99ecb7-45e0-439d-b399-3755faed5090,Namespace:calico-system,Attempt:1,}" Dec 13 09:12:22.267536 systemd-networkd[1367]: cali8f9eacfb15e: Link UP Dec 13 09:12:22.268690 systemd-networkd[1367]: cali8f9eacfb15e: Gained carrier Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.132 [INFO][3942] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0 csi-node-driver- calico-system 7e99ecb7-45e0-439d-b399-3755faed5090 833 0 2024-12-13 09:11:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-e-b721934136 csi-node-driver-fg4tr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f9eacfb15e [] []}} ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Namespace="calico-system" Pod="csi-node-driver-fg4tr" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.132 [INFO][3942] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Namespace="calico-system" Pod="csi-node-driver-fg4tr" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.181 [INFO][3948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" HandleID="k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.199 [INFO][3948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" HandleID="k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002916a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-e-b721934136", "pod":"csi-node-driver-fg4tr", "timestamp":"2024-12-13 09:12:22.181400958 +0000 UTC"}, Hostname:"ci-4081.2.1-e-b721934136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.199 [INFO][3948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.199 [INFO][3948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.199 [INFO][3948] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-e-b721934136' Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.203 [INFO][3948] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.213 [INFO][3948] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.224 [INFO][3948] ipam/ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.227 [INFO][3948] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.234 [INFO][3948] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.234 [INFO][3948] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.240 [INFO][3948] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.248 [INFO][3948] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.258 [INFO][3948] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.129/26] block=192.168.19.128/26 handle="k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.259 [INFO][3948] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.129/26] handle="k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.259 [INFO][3948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:12:22.295138 containerd[1470]: 2024-12-13 09:12:22.259 [INFO][3948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.129/26] IPv6=[] ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" HandleID="k8s-pod-network.90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:22.295863 containerd[1470]: 2024-12-13 09:12:22.263 [INFO][3942] cni-plugin/k8s.go 386: Populated endpoint ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Namespace="calico-system" Pod="csi-node-driver-fg4tr" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e99ecb7-45e0-439d-b399-3755faed5090", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"", Pod:"csi-node-driver-fg4tr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f9eacfb15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:22.295863 containerd[1470]: 2024-12-13 09:12:22.263 [INFO][3942] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.129/32] ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Namespace="calico-system" Pod="csi-node-driver-fg4tr" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:22.295863 containerd[1470]: 2024-12-13 09:12:22.263 [INFO][3942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f9eacfb15e ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Namespace="calico-system" Pod="csi-node-driver-fg4tr" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:22.295863 containerd[1470]: 2024-12-13 09:12:22.268 [INFO][3942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Namespace="calico-system" Pod="csi-node-driver-fg4tr" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:22.295863 containerd[1470]: 2024-12-13 09:12:22.269 [INFO][3942] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Namespace="calico-system" Pod="csi-node-driver-fg4tr" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e99ecb7-45e0-439d-b399-3755faed5090", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a", Pod:"csi-node-driver-fg4tr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f9eacfb15e", MAC:"12:ff:54:51:9c:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:22.295863 containerd[1470]: 2024-12-13 09:12:22.287 [INFO][3942] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a" Namespace="calico-system" Pod="csi-node-driver-fg4tr" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:22.343426 containerd[1470]: time="2024-12-13T09:12:22.342414238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:22.343769 containerd[1470]: time="2024-12-13T09:12:22.343442199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:22.343769 containerd[1470]: time="2024-12-13T09:12:22.343461025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:22.343871 containerd[1470]: time="2024-12-13T09:12:22.343681481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:22.395026 systemd[1]: Started cri-containerd-90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a.scope - libcontainer container 90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a. 
Dec 13 09:12:22.448855 containerd[1470]: time="2024-12-13T09:12:22.448785661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fg4tr,Uid:7e99ecb7-45e0-439d-b399-3755faed5090,Namespace:calico-system,Attempt:1,} returns sandbox id \"90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a\"" Dec 13 09:12:22.456353 containerd[1470]: time="2024-12-13T09:12:22.456140750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 09:12:22.685043 containerd[1470]: time="2024-12-13T09:12:22.684948063Z" level=info msg="StopPodSandbox for \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\"" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.765 [INFO][4023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.765 [INFO][4023] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" iface="eth0" netns="/var/run/netns/cni-12e91dbb-9769-bbf5-5ddb-587c7cb73f92" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.766 [INFO][4023] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" iface="eth0" netns="/var/run/netns/cni-12e91dbb-9769-bbf5-5ddb-587c7cb73f92" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.766 [INFO][4023] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" iface="eth0" netns="/var/run/netns/cni-12e91dbb-9769-bbf5-5ddb-587c7cb73f92" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.766 [INFO][4023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.766 [INFO][4023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.811 [INFO][4029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.812 [INFO][4029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.812 [INFO][4029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.821 [WARNING][4029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.821 [INFO][4029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.824 [INFO][4029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:22.829897 containerd[1470]: 2024-12-13 09:12:22.827 [INFO][4023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:22.831256 containerd[1470]: time="2024-12-13T09:12:22.830545707Z" level=info msg="TearDown network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\" successfully" Dec 13 09:12:22.831256 containerd[1470]: time="2024-12-13T09:12:22.830595090Z" level=info msg="StopPodSandbox for \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\" returns successfully" Dec 13 09:12:22.832972 containerd[1470]: time="2024-12-13T09:12:22.832520027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd59c5464-8chwx,Uid:57fcc6dd-566b-4786-b6c6-5e0d6f04624c,Namespace:calico-apiserver,Attempt:1,}" Dec 13 09:12:22.970032 systemd[1]: run-containerd-runc-k8s.io-90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a-runc.Bvh4KV.mount: Deactivated successfully. Dec 13 09:12:22.970228 systemd[1]: run-netns-cni\x2d12e91dbb\x2d9769\x2dbbf5\x2d5ddb\x2d587c7cb73f92.mount: Deactivated successfully. 
Dec 13 09:12:23.056666 systemd-networkd[1367]: calicd88d0c632b: Link UP Dec 13 09:12:23.060078 systemd-networkd[1367]: calicd88d0c632b: Gained carrier Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:22.914 [INFO][4037] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0 calico-apiserver-dd59c5464- calico-apiserver 57fcc6dd-566b-4786-b6c6-5e0d6f04624c 840 0 2024-12-13 09:11:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd59c5464 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-e-b721934136 calico-apiserver-dd59c5464-8chwx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicd88d0c632b [] []}} ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-8chwx" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:22.914 [INFO][4037] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-8chwx" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:22.979 [INFO][4048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" HandleID="k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:22.995 [INFO][4048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" HandleID="k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-e-b721934136", "pod":"calico-apiserver-dd59c5464-8chwx", "timestamp":"2024-12-13 09:12:22.979876246 +0000 UTC"}, Hostname:"ci-4081.2.1-e-b721934136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:22.995 [INFO][4048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:22.996 [INFO][4048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:22.996 [INFO][4048] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-e-b721934136' Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:22.999 [INFO][4048] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.006 [INFO][4048] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.016 [INFO][4048] ipam/ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.020 [INFO][4048] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.025 [INFO][4048] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.025 [INFO][4048] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.028 [INFO][4048] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.039 [INFO][4048] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.048 [INFO][4048] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.130/26] block=192.168.19.128/26 handle="k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.048 [INFO][4048] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.130/26] handle="k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.048 [INFO][4048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:12:23.090251 containerd[1470]: 2024-12-13 09:12:23.048 [INFO][4048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.130/26] IPv6=[] ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" HandleID="k8s-pod-network.8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:23.092030 containerd[1470]: 2024-12-13 09:12:23.051 [INFO][4037] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-8chwx" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0", GenerateName:"calico-apiserver-dd59c5464-", Namespace:"calico-apiserver", SelfLink:"", UID:"57fcc6dd-566b-4786-b6c6-5e0d6f04624c", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd59c5464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"", Pod:"calico-apiserver-dd59c5464-8chwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicd88d0c632b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:23.092030 containerd[1470]: 2024-12-13 09:12:23.051 [INFO][4037] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.130/32] ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-8chwx" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:23.092030 containerd[1470]: 2024-12-13 09:12:23.051 [INFO][4037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd88d0c632b ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-8chwx" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:23.092030 containerd[1470]: 2024-12-13 09:12:23.055 [INFO][4037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-8chwx" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:23.092030 containerd[1470]: 2024-12-13 09:12:23.056 [INFO][4037] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-8chwx" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0", GenerateName:"calico-apiserver-dd59c5464-", Namespace:"calico-apiserver", SelfLink:"", UID:"57fcc6dd-566b-4786-b6c6-5e0d6f04624c", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd59c5464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a", Pod:"calico-apiserver-dd59c5464-8chwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicd88d0c632b", MAC:"f6:fa:6f:b9:f3:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:23.092030 containerd[1470]: 2024-12-13 09:12:23.080 [INFO][4037] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-8chwx" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:23.147717 containerd[1470]: time="2024-12-13T09:12:23.141980754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:23.147717 containerd[1470]: time="2024-12-13T09:12:23.142177402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:23.147717 containerd[1470]: time="2024-12-13T09:12:23.142199534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:23.147717 containerd[1470]: time="2024-12-13T09:12:23.142361852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:23.196766 systemd[1]: run-containerd-runc-k8s.io-8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a-runc.2LLcms.mount: Deactivated successfully. Dec 13 09:12:23.208111 systemd[1]: Started cri-containerd-8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a.scope - libcontainer container 8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a. 
Dec 13 09:12:23.285086 containerd[1470]: time="2024-12-13T09:12:23.285042926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd59c5464-8chwx,Uid:57fcc6dd-566b-4786-b6c6-5e0d6f04624c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a\"" Dec 13 09:12:24.076999 systemd-networkd[1367]: cali8f9eacfb15e: Gained IPv6LL Dec 13 09:12:24.384428 containerd[1470]: time="2024-12-13T09:12:24.384242260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:24.386973 containerd[1470]: time="2024-12-13T09:12:24.386891793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 09:12:24.387937 containerd[1470]: time="2024-12-13T09:12:24.387888440Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:24.392279 containerd[1470]: time="2024-12-13T09:12:24.392200328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:24.393187 containerd[1470]: time="2024-12-13T09:12:24.393132865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.936926626s" Dec 13 09:12:24.393187 containerd[1470]: time="2024-12-13T09:12:24.393187615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 09:12:24.396172 containerd[1470]: time="2024-12-13T09:12:24.396121911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 09:12:24.399148 containerd[1470]: time="2024-12-13T09:12:24.398939805Z" level=info msg="CreateContainer within sandbox \"90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 09:12:24.426012 containerd[1470]: time="2024-12-13T09:12:24.425934284Z" level=info msg="CreateContainer within sandbox \"90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7f08f9f3149ab83e54d78c794a3ff3822bce8a9ee9948ea3af8d42ad002be6fe\"" Dec 13 09:12:24.428065 containerd[1470]: time="2024-12-13T09:12:24.427559436Z" level=info msg="StartContainer for \"7f08f9f3149ab83e54d78c794a3ff3822bce8a9ee9948ea3af8d42ad002be6fe\"" Dec 13 09:12:24.476393 systemd[1]: run-containerd-runc-k8s.io-7f08f9f3149ab83e54d78c794a3ff3822bce8a9ee9948ea3af8d42ad002be6fe-runc.jxfuoF.mount: Deactivated successfully. Dec 13 09:12:24.489954 systemd[1]: Started cri-containerd-7f08f9f3149ab83e54d78c794a3ff3822bce8a9ee9948ea3af8d42ad002be6fe.scope - libcontainer container 7f08f9f3149ab83e54d78c794a3ff3822bce8a9ee9948ea3af8d42ad002be6fe. 
Dec 13 09:12:24.536190 containerd[1470]: time="2024-12-13T09:12:24.535999553Z" level=info msg="StartContainer for \"7f08f9f3149ab83e54d78c794a3ff3822bce8a9ee9948ea3af8d42ad002be6fe\" returns successfully" Dec 13 09:12:24.651975 systemd-networkd[1367]: calicd88d0c632b: Gained IPv6LL Dec 13 09:12:24.687393 containerd[1470]: time="2024-12-13T09:12:24.687030704Z" level=info msg="StopPodSandbox for \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\"" Dec 13 09:12:24.688825 containerd[1470]: time="2024-12-13T09:12:24.687919089Z" level=info msg="StopPodSandbox for \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\"" Dec 13 09:12:24.690410 containerd[1470]: time="2024-12-13T09:12:24.690358140Z" level=info msg="StopPodSandbox for \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\"" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.835 [INFO][4190] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.835 [INFO][4190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" iface="eth0" netns="/var/run/netns/cni-d6d2d0b6-e4e0-c47b-30ae-85ea86f55c35" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.837 [INFO][4190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" iface="eth0" netns="/var/run/netns/cni-d6d2d0b6-e4e0-c47b-30ae-85ea86f55c35" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.841 [INFO][4190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" iface="eth0" netns="/var/run/netns/cni-d6d2d0b6-e4e0-c47b-30ae-85ea86f55c35" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.841 [INFO][4190] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.842 [INFO][4190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.927 [INFO][4204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.928 [INFO][4204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.928 [INFO][4204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.944 [WARNING][4204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.944 [INFO][4204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.949 [INFO][4204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:24.962497 containerd[1470]: 2024-12-13 09:12:24.953 [INFO][4190] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:24.968138 containerd[1470]: time="2024-12-13T09:12:24.965946474Z" level=info msg="TearDown network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\" successfully" Dec 13 09:12:24.969928 containerd[1470]: time="2024-12-13T09:12:24.969696247Z" level=info msg="StopPodSandbox for \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\" returns successfully" Dec 13 09:12:24.971399 kubelet[2567]: E1213 09:12:24.971303 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:24.972818 systemd[1]: run-netns-cni\x2dd6d2d0b6\x2de4e0\x2dc47b\x2d30ae\x2d85ea86f55c35.mount: Deactivated successfully. Dec 13 09:12:24.977215 containerd[1470]: time="2024-12-13T09:12:24.977122261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbtvp,Uid:eb0da44c-3f76-4633-8e65-0b9e15072d96,Namespace:kube-system,Attempt:1,}" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.833 [INFO][4174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.841 [INFO][4174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" iface="eth0" netns="/var/run/netns/cni-8f35c40b-b9a9-004b-a4e9-b79b3aa04903" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.845 [INFO][4174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" iface="eth0" netns="/var/run/netns/cni-8f35c40b-b9a9-004b-a4e9-b79b3aa04903" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.848 [INFO][4174] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" iface="eth0" netns="/var/run/netns/cni-8f35c40b-b9a9-004b-a4e9-b79b3aa04903" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.848 [INFO][4174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.848 [INFO][4174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.936 [INFO][4206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.940 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.949 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.966 [WARNING][4206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.967 [INFO][4206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.971 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:24.983615 containerd[1470]: 2024-12-13 09:12:24.976 [INFO][4174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:24.984329 containerd[1470]: time="2024-12-13T09:12:24.982792255Z" level=info msg="TearDown network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\" successfully" Dec 13 09:12:24.984329 containerd[1470]: time="2024-12-13T09:12:24.983753670Z" level=info msg="StopPodSandbox for \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\" returns successfully" Dec 13 09:12:24.985944 containerd[1470]: time="2024-12-13T09:12:24.984853826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd59c5464-h679k,Uid:4700244a-abfc-4fc6-93e2-57b920e50bc1,Namespace:calico-apiserver,Attempt:1,}" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.836 [INFO][4191] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.836 [INFO][4191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" iface="eth0" netns="/var/run/netns/cni-6ede8854-106a-395f-54da-b4fc3db21721" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.843 [INFO][4191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" iface="eth0" netns="/var/run/netns/cni-6ede8854-106a-395f-54da-b4fc3db21721" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.851 [INFO][4191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" iface="eth0" netns="/var/run/netns/cni-6ede8854-106a-395f-54da-b4fc3db21721" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.851 [INFO][4191] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.852 [INFO][4191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.979 [INFO][4207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.979 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.979 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.993 [WARNING][4207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.993 [INFO][4207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:24.998 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:25.008674 containerd[1470]: 2024-12-13 09:12:25.002 [INFO][4191] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:25.011571 containerd[1470]: time="2024-12-13T09:12:25.010821043Z" level=info msg="TearDown network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\" successfully" Dec 13 09:12:25.011571 containerd[1470]: time="2024-12-13T09:12:25.010877734Z" level=info msg="StopPodSandbox for \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\" returns successfully" Dec 13 09:12:25.023146 containerd[1470]: time="2024-12-13T09:12:25.020937827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f66d7bd-gnjzf,Uid:6c0ab554-eaa5-49a2-ba96-7901a803a4df,Namespace:calico-system,Attempt:1,}" Dec 13 09:12:25.353882 systemd-networkd[1367]: cali5f5b3bcc039: Link UP Dec 13 09:12:25.367377 systemd-networkd[1367]: cali5f5b3bcc039: Gained carrier Dec 13 09:12:25.427973 systemd[1]: run-netns-cni\x2d8f35c40b\x2db9a9\x2d004b\x2da4e9\x2db79b3aa04903.mount: Deactivated successfully. Dec 13 09:12:25.428492 systemd[1]: run-netns-cni\x2d6ede8854\x2d106a\x2d395f\x2d54da\x2db4fc3db21721.mount: Deactivated successfully. Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.158 [INFO][4224] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0 calico-apiserver-dd59c5464- calico-apiserver 4700244a-abfc-4fc6-93e2-57b920e50bc1 855 0 2024-12-13 09:11:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd59c5464 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-e-b721934136 calico-apiserver-dd59c5464-h679k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f5b3bcc039 [] []}} ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-h679k" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.159 [INFO][4224] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-h679k" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.235 [INFO][4273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" HandleID="k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.251 [INFO][4273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" HandleID="k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030e120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-e-b721934136", 
"pod":"calico-apiserver-dd59c5464-h679k", "timestamp":"2024-12-13 09:12:25.235254038 +0000 UTC"}, Hostname:"ci-4081.2.1-e-b721934136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.251 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.251 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.251 [INFO][4273] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-e-b721934136' Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.257 [INFO][4273] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.271 [INFO][4273] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.283 [INFO][4273] ipam/ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.291 [INFO][4273] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.296 [INFO][4273] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.297 [INFO][4273] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.303 [INFO][4273] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.313 [INFO][4273] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.326 [INFO][4273] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.131/26] block=192.168.19.128/26 handle="k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.326 [INFO][4273] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.131/26] handle="k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.326 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:12:25.455678 containerd[1470]: 2024-12-13 09:12:25.327 [INFO][4273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.131/26] IPv6=[] ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" HandleID="k8s-pod-network.8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:25.456914 containerd[1470]: 2024-12-13 09:12:25.337 [INFO][4224] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-h679k" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0", GenerateName:"calico-apiserver-dd59c5464-", Namespace:"calico-apiserver", SelfLink:"", UID:"4700244a-abfc-4fc6-93e2-57b920e50bc1", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd59c5464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"", Pod:"calico-apiserver-dd59c5464-h679k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f5b3bcc039", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:25.456914 containerd[1470]: 2024-12-13 09:12:25.337 [INFO][4224] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.131/32] ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-h679k" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:25.456914 containerd[1470]: 2024-12-13 09:12:25.338 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f5b3bcc039 ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-h679k" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:25.456914 containerd[1470]: 2024-12-13 09:12:25.360 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-h679k" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:25.456914 containerd[1470]: 2024-12-13 09:12:25.362 [INFO][4224] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-h679k" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0", GenerateName:"calico-apiserver-dd59c5464-", Namespace:"calico-apiserver", SelfLink:"", UID:"4700244a-abfc-4fc6-93e2-57b920e50bc1", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd59c5464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd", Pod:"calico-apiserver-dd59c5464-h679k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f5b3bcc039", MAC:"fe:8a:46:ae:7f:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:25.456914 containerd[1470]: 2024-12-13 09:12:25.402 [INFO][4224] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd" Namespace="calico-apiserver" Pod="calico-apiserver-dd59c5464-h679k" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:25.509425 systemd-networkd[1367]: cali8a4830723c3: Link UP Dec 13 09:12:25.512317 systemd-networkd[1367]: cali8a4830723c3: Gained carrier Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.112 [INFO][4228] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0 coredns-7db6d8ff4d- kube-system eb0da44c-3f76-4633-8e65-0b9e15072d96 856 0 2024-12-13 09:11:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-e-b721934136 coredns-7db6d8ff4d-rbtvp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a4830723c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rbtvp" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.112 [INFO][4228] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-rbtvp" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.232 [INFO][4269] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" HandleID="k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.262 [INFO][4269] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" HandleID="k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002af170), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-e-b721934136", "pod":"coredns-7db6d8ff4d-rbtvp", "timestamp":"2024-12-13 09:12:25.232579971 +0000 UTC"}, Hostname:"ci-4081.2.1-e-b721934136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.262 [INFO][4269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.328 [INFO][4269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.328 [INFO][4269] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-e-b721934136' Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.335 [INFO][4269] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.363 [INFO][4269] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.411 [INFO][4269] ipam/ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.425 [INFO][4269] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.448 [INFO][4269] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.448 [INFO][4269] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.453 [INFO][4269] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.466 [INFO][4269] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" 
host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.487 [INFO][4269] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.132/26] block=192.168.19.128/26 handle="k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.487 [INFO][4269] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.132/26] handle="k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.487 [INFO][4269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:25.548896 containerd[1470]: 2024-12-13 09:12:25.488 [INFO][4269] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.132/26] IPv6=[] ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" HandleID="k8s-pod-network.4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:25.550535 containerd[1470]: 2024-12-13 09:12:25.498 [INFO][4228] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rbtvp" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eb0da44c-3f76-4633-8e65-0b9e15072d96", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"", Pod:"coredns-7db6d8ff4d-rbtvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a4830723c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:25.550535 containerd[1470]: 2024-12-13 09:12:25.499 [INFO][4228] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.132/32] ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rbtvp" 
WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:25.550535 containerd[1470]: 2024-12-13 09:12:25.499 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a4830723c3 ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rbtvp" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:25.550535 containerd[1470]: 2024-12-13 09:12:25.514 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rbtvp" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:25.550535 containerd[1470]: 2024-12-13 09:12:25.515 [INFO][4228] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rbtvp" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eb0da44c-3f76-4633-8e65-0b9e15072d96", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b", Pod:"coredns-7db6d8ff4d-rbtvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a4830723c3", MAC:"ce:39:9e:57:29:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:25.550535 containerd[1470]: 2024-12-13 09:12:25.534 [INFO][4228] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rbtvp" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:25.609207 containerd[1470]: time="2024-12-13T09:12:25.605587072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:25.609207 containerd[1470]: time="2024-12-13T09:12:25.605886813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:25.609207 containerd[1470]: time="2024-12-13T09:12:25.605945828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:25.609207 containerd[1470]: time="2024-12-13T09:12:25.606347946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:25.651512 systemd-networkd[1367]: cali0cbaaffb976: Link UP Dec 13 09:12:25.654773 systemd-networkd[1367]: cali0cbaaffb976: Gained carrier Dec 13 09:12:25.685836 systemd[1]: Started cri-containerd-8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd.scope - libcontainer container 8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd. Dec 13 09:12:25.720558 containerd[1470]: time="2024-12-13T09:12:25.694466508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:25.720558 containerd[1470]: time="2024-12-13T09:12:25.694537352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:25.720558 containerd[1470]: time="2024-12-13T09:12:25.694557685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:25.720558 containerd[1470]: time="2024-12-13T09:12:25.694715240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:25.722129 containerd[1470]: time="2024-12-13T09:12:25.721829897Z" level=info msg="StopPodSandbox for \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\"" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.199 [INFO][4251] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0 calico-kube-controllers-847f66d7bd- calico-system 6c0ab554-eaa5-49a2-ba96-7901a803a4df 857 0 2024-12-13 09:11:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:847f66d7bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-e-b721934136 calico-kube-controllers-847f66d7bd-gnjzf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0cbaaffb976 [] []}} ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Namespace="calico-system" Pod="calico-kube-controllers-847f66d7bd-gnjzf" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.199 [INFO][4251] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Namespace="calico-system" Pod="calico-kube-controllers-847f66d7bd-gnjzf" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.286 [INFO][4280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" HandleID="k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.312 [INFO][4280] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" HandleID="k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004116b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-e-b721934136", "pod":"calico-kube-controllers-847f66d7bd-gnjzf", "timestamp":"2024-12-13 09:12:25.286717276 +0000 UTC"}, Hostname:"ci-4081.2.1-e-b721934136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.312 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.488 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.489 [INFO][4280] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-e-b721934136' Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.497 [INFO][4280] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.509 [INFO][4280] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.523 [INFO][4280] ipam/ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.545 [INFO][4280] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.553 [INFO][4280] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.554 [INFO][4280] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.561 [INFO][4280] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601 Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.574 [INFO][4280] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.596 [INFO][4280] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.133/26] block=192.168.19.128/26 handle="k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.596 [INFO][4280] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.133/26] handle="k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.596 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:12:25.729937 containerd[1470]: 2024-12-13 09:12:25.597 [INFO][4280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.133/26] IPv6=[] ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" HandleID="k8s-pod-network.02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.731107 containerd[1470]: 2024-12-13 09:12:25.623 [INFO][4251] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Namespace="calico-system" Pod="calico-kube-controllers-847f66d7bd-gnjzf" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0", GenerateName:"calico-kube-controllers-847f66d7bd-", Namespace:"calico-system", SelfLink:"", UID:"6c0ab554-eaa5-49a2-ba96-7901a803a4df", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847f66d7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"", Pod:"calico-kube-controllers-847f66d7bd-gnjzf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0cbaaffb976", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:25.731107 containerd[1470]: 2024-12-13 09:12:25.625 [INFO][4251] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.133/32] ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Namespace="calico-system" Pod="calico-kube-controllers-847f66d7bd-gnjzf" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.731107 containerd[1470]: 2024-12-13 09:12:25.625 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cbaaffb976 ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Namespace="calico-system" Pod="calico-kube-controllers-847f66d7bd-gnjzf" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.731107 containerd[1470]: 2024-12-13 09:12:25.656 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Namespace="calico-system" Pod="calico-kube-controllers-847f66d7bd-gnjzf" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.731107 
containerd[1470]: 2024-12-13 09:12:25.658 [INFO][4251] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Namespace="calico-system" Pod="calico-kube-controllers-847f66d7bd-gnjzf" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0", GenerateName:"calico-kube-controllers-847f66d7bd-", Namespace:"calico-system", SelfLink:"", UID:"6c0ab554-eaa5-49a2-ba96-7901a803a4df", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847f66d7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601", Pod:"calico-kube-controllers-847f66d7bd-gnjzf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0cbaaffb976", MAC:"92:53:c9:af:30:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:25.731107 containerd[1470]: 2024-12-13 09:12:25.687 [INFO][4251] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601" Namespace="calico-system" Pod="calico-kube-controllers-847f66d7bd-gnjzf" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:25.757950 systemd[1]: Started cri-containerd-4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b.scope - libcontainer container 4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b. Dec 13 09:12:25.824739 containerd[1470]: time="2024-12-13T09:12:25.823213850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:25.824739 containerd[1470]: time="2024-12-13T09:12:25.823333311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:25.824739 containerd[1470]: time="2024-12-13T09:12:25.823354074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:25.829189 containerd[1470]: time="2024-12-13T09:12:25.827836028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:25.895433 systemd[1]: Started cri-containerd-02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601.scope - libcontainer container 02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601. Dec 13 09:12:25.935217 containerd[1470]: time="2024-12-13T09:12:25.933422852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbtvp,Uid:eb0da44c-3f76-4633-8e65-0b9e15072d96,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b\"" Dec 13 09:12:25.939120 kubelet[2567]: E1213 09:12:25.936502 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:25.957080 containerd[1470]: time="2024-12-13T09:12:25.957017620Z" level=info msg="CreateContainer within sandbox \"4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:12:26.043835 containerd[1470]: time="2024-12-13T09:12:26.043747830Z" level=info msg="CreateContainer within sandbox \"4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce78201686f949a0adb55f434083a28dda1905b1b787df745dca911ca171f529\"" Dec 13 09:12:26.049809 containerd[1470]: time="2024-12-13T09:12:26.046021945Z" level=info msg="StartContainer for \"ce78201686f949a0adb55f434083a28dda1905b1b787df745dca911ca171f529\"" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:25.995 [INFO][4408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:25.996 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" iface="eth0" netns="/var/run/netns/cni-67e575a4-c646-1229-be91-39838ccad310" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:25.996 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" iface="eth0" netns="/var/run/netns/cni-67e575a4-c646-1229-be91-39838ccad310" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:25.996 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" iface="eth0" netns="/var/run/netns/cni-67e575a4-c646-1229-be91-39838ccad310" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:25.997 [INFO][4408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:25.997 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:26.150 [INFO][4455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:26.150 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:26.150 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:26.169 [WARNING][4455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:26.169 [INFO][4455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:26.184 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:26.221613 containerd[1470]: 2024-12-13 09:12:26.195 [INFO][4408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:26.225172 containerd[1470]: time="2024-12-13T09:12:26.224732336Z" level=info msg="TearDown network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\" successfully" Dec 13 09:12:26.226403 containerd[1470]: time="2024-12-13T09:12:26.225075851Z" level=info msg="StopPodSandbox for \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\" returns successfully" Dec 13 09:12:26.242203 kubelet[2567]: E1213 09:12:26.242137 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:26.250740 containerd[1470]: time="2024-12-13T09:12:26.246947389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcqvh,Uid:2399d564-1e25-4cf5-a873-070c1c53ce9a,Namespace:kube-system,Attempt:1,}" Dec 13 09:12:26.250470 systemd[1]: Started cri-containerd-ce78201686f949a0adb55f434083a28dda1905b1b787df745dca911ca171f529.scope - libcontainer container ce78201686f949a0adb55f434083a28dda1905b1b787df745dca911ca171f529. 
Dec 13 09:12:26.286760 containerd[1470]: time="2024-12-13T09:12:26.286066391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd59c5464-h679k,Uid:4700244a-abfc-4fc6-93e2-57b920e50bc1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd\"" Dec 13 09:12:26.389891 containerd[1470]: time="2024-12-13T09:12:26.389247712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f66d7bd-gnjzf,Uid:6c0ab554-eaa5-49a2-ba96-7901a803a4df,Namespace:calico-system,Attempt:1,} returns sandbox id \"02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601\"" Dec 13 09:12:26.447095 systemd[1]: run-containerd-runc-k8s.io-4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b-runc.yIWm96.mount: Deactivated successfully. Dec 13 09:12:26.447285 systemd[1]: run-netns-cni\x2d67e575a4\x2dc646\x2d1229\x2dbe91\x2d39838ccad310.mount: Deactivated successfully. Dec 13 09:12:26.489704 containerd[1470]: time="2024-12-13T09:12:26.487355347Z" level=info msg="StartContainer for \"ce78201686f949a0adb55f434083a28dda1905b1b787df745dca911ca171f529\" returns successfully" Dec 13 09:12:26.701218 systemd-networkd[1367]: cali8a4830723c3: Gained IPv6LL Dec 13 09:12:26.867003 systemd-networkd[1367]: califc2b98bced3: Link UP Dec 13 09:12:26.867585 systemd-networkd[1367]: califc2b98bced3: Gained carrier Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.555 [INFO][4493] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0 coredns-7db6d8ff4d- kube-system 2399d564-1e25-4cf5-a873-070c1c53ce9a 873 0 2024-12-13 09:11:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-e-b721934136 coredns-7db6d8ff4d-hcqvh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc2b98bced3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcqvh" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.557 [INFO][4493] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcqvh" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.666 [INFO][4522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" HandleID="k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.694 [INFO][4522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" HandleID="k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002fdd70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-e-b721934136", "pod":"coredns-7db6d8ff4d-hcqvh", "timestamp":"2024-12-13 09:12:26.666536189 +0000 UTC"}, Hostname:"ci-4081.2.1-e-b721934136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.695 [INFO][4522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.695 [INFO][4522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.695 [INFO][4522] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-e-b721934136' Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.706 [INFO][4522] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.733 [INFO][4522] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.745 [INFO][4522] ipam/ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.749 [INFO][4522] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.757 [INFO][4522] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.757 [INFO][4522] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.768 [INFO][4522] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6 Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.793 [INFO][4522] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.849 [INFO][4522] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.134/26] block=192.168.19.128/26 handle="k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.849 [INFO][4522] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.134/26] handle="k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" host="ci-4081.2.1-e-b721934136" Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.850 [INFO][4522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:12:26.924679 containerd[1470]: 2024-12-13 09:12:26.850 [INFO][4522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.134/26] IPv6=[] ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" HandleID="k8s-pod-network.69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.925832 containerd[1470]: 2024-12-13 09:12:26.855 [INFO][4493] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcqvh" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2399d564-1e25-4cf5-a873-070c1c53ce9a", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"", Pod:"coredns-7db6d8ff4d-hcqvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc2b98bced3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:26.925832 containerd[1470]: 2024-12-13 09:12:26.856 [INFO][4493] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.134/32] ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcqvh" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.925832 containerd[1470]: 2024-12-13 09:12:26.856 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc2b98bced3 ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcqvh" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.925832 containerd[1470]: 2024-12-13 09:12:26.866 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcqvh" 
WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.925832 containerd[1470]: 2024-12-13 09:12:26.870 [INFO][4493] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcqvh" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2399d564-1e25-4cf5-a873-070c1c53ce9a", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6", Pod:"coredns-7db6d8ff4d-hcqvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc2b98bced3", MAC:"6e:ac:b6:a5:21:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:26.925832 containerd[1470]: 2024-12-13 09:12:26.912 [INFO][4493] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcqvh" WorkloadEndpoint="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:26.955877 systemd-networkd[1367]: cali5f5b3bcc039: Gained IPv6LL Dec 13 09:12:27.176881 containerd[1470]: time="2024-12-13T09:12:27.175625016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:27.176881 containerd[1470]: time="2024-12-13T09:12:27.175759128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:27.176881 containerd[1470]: time="2024-12-13T09:12:27.175779438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:27.176881 containerd[1470]: time="2024-12-13T09:12:27.175909652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:27.288667 kubelet[2567]: E1213 09:12:27.287751 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:27.324038 systemd[1]: Started cri-containerd-69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6.scope - libcontainer container 69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6. Dec 13 09:12:27.391671 kubelet[2567]: I1213 09:12:27.390649 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rbtvp" podStartSLOduration=43.390602876 podStartE2EDuration="43.390602876s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:27.320687565 +0000 UTC m=+57.807517835" watchObservedRunningTime="2024-12-13 09:12:27.390602876 +0000 UTC m=+57.877433140" Dec 13 09:12:27.518745 containerd[1470]: time="2024-12-13T09:12:27.518513758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcqvh,Uid:2399d564-1e25-4cf5-a873-070c1c53ce9a,Namespace:kube-system,Attempt:1,} returns sandbox id \"69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6\"" Dec 13 09:12:27.523548 kubelet[2567]: E1213 09:12:27.522598 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:27.537295 containerd[1470]: time="2024-12-13T09:12:27.537152145Z" level=info msg="CreateContainer within sandbox \"69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:12:27.574026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335384693.mount: Deactivated successfully. Dec 13 09:12:27.585696 containerd[1470]: time="2024-12-13T09:12:27.585205888Z" level=info msg="CreateContainer within sandbox \"69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae65f66dcfff186e8922f71f3977bedf7d7e223fbce0911974c78adba32b13f9\"" Dec 13 09:12:27.588377 containerd[1470]: time="2024-12-13T09:12:27.588315088Z" level=info msg="StartContainer for \"ae65f66dcfff186e8922f71f3977bedf7d7e223fbce0911974c78adba32b13f9\"" Dec 13 09:12:27.661677 systemd-networkd[1367]: cali0cbaaffb976: Gained IPv6LL Dec 13 09:12:27.692960 systemd[1]: Started cri-containerd-ae65f66dcfff186e8922f71f3977bedf7d7e223fbce0911974c78adba32b13f9.scope - libcontainer container ae65f66dcfff186e8922f71f3977bedf7d7e223fbce0911974c78adba32b13f9. 
Dec 13 09:12:27.819663 containerd[1470]: time="2024-12-13T09:12:27.818592339Z" level=info msg="StartContainer for \"ae65f66dcfff186e8922f71f3977bedf7d7e223fbce0911974c78adba32b13f9\" returns successfully" Dec 13 09:12:27.979967 systemd-networkd[1367]: califc2b98bced3: Gained IPv6LL Dec 13 09:12:28.301119 kubelet[2567]: E1213 09:12:28.299541 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:28.304762 kubelet[2567]: E1213 09:12:28.304713 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:28.388667 kubelet[2567]: I1213 09:12:28.386280 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hcqvh" podStartSLOduration=44.38624994 podStartE2EDuration="44.38624994s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:28.347080051 +0000 UTC m=+58.833910297" watchObservedRunningTime="2024-12-13 09:12:28.38624994 +0000 UTC m=+58.873080196" Dec 13 09:12:29.218742 containerd[1470]: time="2024-12-13T09:12:29.217153956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:29.220489 containerd[1470]: time="2024-12-13T09:12:29.220274906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 09:12:29.222467 containerd[1470]: time="2024-12-13T09:12:29.221727542Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:29.225302 containerd[1470]: time="2024-12-13T09:12:29.225246616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:29.226431 containerd[1470]: time="2024-12-13T09:12:29.226376527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.830199567s" Dec 13 09:12:29.226579 containerd[1470]: time="2024-12-13T09:12:29.226560747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 09:12:29.229302 containerd[1470]: time="2024-12-13T09:12:29.229213140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 09:12:29.234867 containerd[1470]: time="2024-12-13T09:12:29.234813064Z" level=info msg="CreateContainer within sandbox \"8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 09:12:29.257281 containerd[1470]: time="2024-12-13T09:12:29.257079078Z" level=info msg="CreateContainer within 
sandbox \"8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4cff131a8c9ddfe2dbbabeb0b7db1c92acc2dbaa4ca73b79e35f4ff29d7c8d57\"" Dec 13 09:12:29.259676 containerd[1470]: time="2024-12-13T09:12:29.258051170Z" level=info msg="StartContainer for \"4cff131a8c9ddfe2dbbabeb0b7db1c92acc2dbaa4ca73b79e35f4ff29d7c8d57\"" Dec 13 09:12:29.314339 kubelet[2567]: E1213 09:12:29.314294 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:29.356989 systemd[1]: Started cri-containerd-4cff131a8c9ddfe2dbbabeb0b7db1c92acc2dbaa4ca73b79e35f4ff29d7c8d57.scope - libcontainer container 4cff131a8c9ddfe2dbbabeb0b7db1c92acc2dbaa4ca73b79e35f4ff29d7c8d57. Dec 13 09:12:29.444255 containerd[1470]: time="2024-12-13T09:12:29.443625882Z" level=info msg="StartContainer for \"4cff131a8c9ddfe2dbbabeb0b7db1c92acc2dbaa4ca73b79e35f4ff29d7c8d57\" returns successfully" Dec 13 09:12:29.675767 containerd[1470]: time="2024-12-13T09:12:29.675703643Z" level=info msg="StopPodSandbox for \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\"" Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.791 [WARNING][4688] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e99ecb7-45e0-439d-b399-3755faed5090", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a", Pod:"csi-node-driver-fg4tr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f9eacfb15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.792 [INFO][4688] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.792 [INFO][4688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" iface="eth0" netns="" Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.792 [INFO][4688] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.792 [INFO][4688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.867 [INFO][4695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.867 [INFO][4695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.867 [INFO][4695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.876 [WARNING][4695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.876 [INFO][4695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.879 [INFO][4695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:29.886948 containerd[1470]: 2024-12-13 09:12:29.881 [INFO][4688] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:29.886948 containerd[1470]: time="2024-12-13T09:12:29.886678526Z" level=info msg="TearDown network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\" successfully" Dec 13 09:12:29.886948 containerd[1470]: time="2024-12-13T09:12:29.886705767Z" level=info msg="StopPodSandbox for \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\" returns successfully" Dec 13 09:12:29.889088 containerd[1470]: time="2024-12-13T09:12:29.888130633Z" level=info msg="RemovePodSandbox for \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\"" Dec 13 09:12:29.889088 containerd[1470]: time="2024-12-13T09:12:29.888206304Z" level=info msg="Forcibly stopping sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\"" Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:29.977 [WARNING][4713] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e99ecb7-45e0-439d-b399-3755faed5090", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a", Pod:"csi-node-driver-fg4tr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f9eacfb15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:29.977 [INFO][4713] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:29.977 [INFO][4713] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" iface="eth0" netns="" Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:29.979 [INFO][4713] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:29.979 [INFO][4713] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:30.059 [INFO][4719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:30.059 [INFO][4719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:30.059 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:30.094 [WARNING][4719] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:30.094 [INFO][4719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" HandleID="k8s-pod-network.e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Workload="ci--4081.2.1--e--b721934136-k8s-csi--node--driver--fg4tr-eth0" Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:30.102 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:30.112556 containerd[1470]: 2024-12-13 09:12:30.104 [INFO][4713] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f" Dec 13 09:12:30.116091 containerd[1470]: time="2024-12-13T09:12:30.113135144Z" level=info msg="TearDown network for sandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\" successfully" Dec 13 09:12:30.129202 containerd[1470]: time="2024-12-13T09:12:30.129142926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:12:30.129418 containerd[1470]: time="2024-12-13T09:12:30.129399557Z" level=info msg="RemovePodSandbox \"e353ade0fd510c8423f00f163850e4ca31417bf10d3c7112535820bc6505345f\" returns successfully" Dec 13 09:12:30.131021 containerd[1470]: time="2024-12-13T09:12:30.130929791Z" level=info msg="StopPodSandbox for \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\"" Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.263 [WARNING][4746] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2399d564-1e25-4cf5-a873-070c1c53ce9a", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6", Pod:"coredns-7db6d8ff4d-hcqvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc2b98bced3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.265 [INFO][4746] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.265 [INFO][4746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" iface="eth0" netns="" Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.265 [INFO][4746] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.266 [INFO][4746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.320 [INFO][4754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.325 [INFO][4754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.326 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.339 [WARNING][4754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.339 [INFO][4754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.342 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:30.356321 containerd[1470]: 2024-12-13 09:12:30.348 [INFO][4746] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:30.356321 containerd[1470]: time="2024-12-13T09:12:30.356406072Z" level=info msg="TearDown network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\" successfully" Dec 13 09:12:30.356321 containerd[1470]: time="2024-12-13T09:12:30.356444080Z" level=info msg="StopPodSandbox for \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\" returns successfully" Dec 13 09:12:30.356321 containerd[1470]: time="2024-12-13T09:12:30.359239478Z" level=info msg="RemovePodSandbox for \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\"" Dec 13 09:12:30.356321 containerd[1470]: time="2024-12-13T09:12:30.359816201Z" level=info msg="Forcibly stopping sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\"" Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.498 [WARNING][4773] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2399d564-1e25-4cf5-a873-070c1c53ce9a", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"69f8bd89f706fbf29b028b5e79efbb22900fe292e24968baf747e4eaab69a3c6", Pod:"coredns-7db6d8ff4d-hcqvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc2b98bced3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.499 [INFO][4773] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.499 [INFO][4773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" iface="eth0" netns="" Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.499 [INFO][4773] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.499 [INFO][4773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.561 [INFO][4781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.562 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.562 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.573 [WARNING][4781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.573 [INFO][4781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" HandleID="k8s-pod-network.5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--hcqvh-eth0" Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.578 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:30.588063 containerd[1470]: 2024-12-13 09:12:30.582 [INFO][4773] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210" Dec 13 09:12:30.588063 containerd[1470]: time="2024-12-13T09:12:30.587211503Z" level=info msg="TearDown network for sandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\" successfully" Dec 13 09:12:30.603534 containerd[1470]: time="2024-12-13T09:12:30.603356949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:12:30.603534 containerd[1470]: time="2024-12-13T09:12:30.603465698Z" level=info msg="RemovePodSandbox \"5d769bdbc33915068200477ac35b6960319a60abf41d49c35de456ecb67f3210\" returns successfully" Dec 13 09:12:30.605029 containerd[1470]: time="2024-12-13T09:12:30.604480808Z" level=info msg="StopPodSandbox for \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\"" Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.729 [WARNING][4799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0", GenerateName:"calico-apiserver-dd59c5464-", Namespace:"calico-apiserver", SelfLink:"", UID:"57fcc6dd-566b-4786-b6c6-5e0d6f04624c", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd59c5464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a", Pod:"calico-apiserver-dd59c5464-8chwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicd88d0c632b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.730 [INFO][4799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.730 [INFO][4799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" iface="eth0" netns="" Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.730 [INFO][4799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.730 [INFO][4799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.761 [INFO][4805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.761 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.761 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.773 [WARNING][4805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.773 [INFO][4805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.776 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:30.784789 containerd[1470]: 2024-12-13 09:12:30.780 [INFO][4799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:30.784789 containerd[1470]: time="2024-12-13T09:12:30.784066322Z" level=info msg="TearDown network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\" successfully" Dec 13 09:12:30.784789 containerd[1470]: time="2024-12-13T09:12:30.784108378Z" level=info msg="StopPodSandbox for \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\" returns successfully" Dec 13 09:12:30.788176 containerd[1470]: time="2024-12-13T09:12:30.784829501Z" level=info msg="RemovePodSandbox for \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\"" Dec 13 09:12:30.788176 containerd[1470]: time="2024-12-13T09:12:30.784883975Z" level=info msg="Forcibly stopping sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\"" Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.850 [WARNING][4823] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0", GenerateName:"calico-apiserver-dd59c5464-", Namespace:"calico-apiserver", SelfLink:"", UID:"57fcc6dd-566b-4786-b6c6-5e0d6f04624c", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd59c5464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"8e51e56433a015c057cf477d9e7bc20c0ce3a5f9f7650047b0cff25c77ef539a", Pod:"calico-apiserver-dd59c5464-8chwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicd88d0c632b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.851 [INFO][4823] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.851 [INFO][4823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" iface="eth0" netns="" Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.851 [INFO][4823] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.851 [INFO][4823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.893 [INFO][4829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.893 [INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.893 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.905 [WARNING][4829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.905 [INFO][4829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" HandleID="k8s-pod-network.fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--8chwx-eth0" Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.910 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:30.917938 containerd[1470]: 2024-12-13 09:12:30.912 [INFO][4823] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19" Dec 13 09:12:30.917938 containerd[1470]: time="2024-12-13T09:12:30.916002102Z" level=info msg="TearDown network for sandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\" successfully" Dec 13 09:12:30.924261 containerd[1470]: time="2024-12-13T09:12:30.923026356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:12:30.924261 containerd[1470]: time="2024-12-13T09:12:30.923123221Z" level=info msg="RemovePodSandbox \"fcebbf9ff255a39523e24688a9b20c584b5a715ee32514148a982e7efd530f19\" returns successfully" Dec 13 09:12:30.924261 containerd[1470]: time="2024-12-13T09:12:30.923890134Z" level=info msg="StopPodSandbox for \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\"" Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.040 [WARNING][4847] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0", GenerateName:"calico-kube-controllers-847f66d7bd-", Namespace:"calico-system", SelfLink:"", UID:"6c0ab554-eaa5-49a2-ba96-7901a803a4df", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847f66d7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601", Pod:"calico-kube-controllers-847f66d7bd-gnjzf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0cbaaffb976", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.044 [INFO][4847] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.044 [INFO][4847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" iface="eth0" netns="" Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.044 [INFO][4847] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.044 [INFO][4847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.132 [INFO][4853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.134 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.134 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.155 [WARNING][4853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.157 [INFO][4853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.163 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:31.184599 containerd[1470]: 2024-12-13 09:12:31.177 [INFO][4847] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:31.186850 containerd[1470]: time="2024-12-13T09:12:31.186459098Z" level=info msg="TearDown network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\" successfully" Dec 13 09:12:31.186850 containerd[1470]: time="2024-12-13T09:12:31.186694247Z" level=info msg="StopPodSandbox for \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\" returns successfully" Dec 13 09:12:31.191067 containerd[1470]: time="2024-12-13T09:12:31.190841289Z" level=info msg="RemovePodSandbox for \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\"" Dec 13 09:12:31.191693 containerd[1470]: time="2024-12-13T09:12:31.191309229Z" level=info msg="Forcibly stopping sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\"" Dec 13 09:12:31.339859 kubelet[2567]: I1213 09:12:31.338991 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.350 [WARNING][4875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0", GenerateName:"calico-kube-controllers-847f66d7bd-", Namespace:"calico-system", SelfLink:"", UID:"6c0ab554-eaa5-49a2-ba96-7901a803a4df", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847f66d7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601", Pod:"calico-kube-controllers-847f66d7bd-gnjzf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0cbaaffb976", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.351 [INFO][4875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.351 [INFO][4875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" iface="eth0" netns="" Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.351 [INFO][4875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.351 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.430 [INFO][4882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.431 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.431 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.452 [WARNING][4882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.452 [INFO][4882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" HandleID="k8s-pod-network.9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Workload="ci--4081.2.1--e--b721934136-k8s-calico--kube--controllers--847f66d7bd--gnjzf-eth0" Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.456 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:31.472605 containerd[1470]: 2024-12-13 09:12:31.463 [INFO][4875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593" Dec 13 09:12:31.472605 containerd[1470]: time="2024-12-13T09:12:31.472370363Z" level=info msg="TearDown network for sandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\" successfully" Dec 13 09:12:31.486381 containerd[1470]: time="2024-12-13T09:12:31.486163061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:12:31.486381 containerd[1470]: time="2024-12-13T09:12:31.486262935Z" level=info msg="RemovePodSandbox \"9083c936da8f8f702587c99d9cc2d53875a7abb59e3cc3bf1c72570e1620d593\" returns successfully" Dec 13 09:12:31.488942 containerd[1470]: time="2024-12-13T09:12:31.487926462Z" level=info msg="StopPodSandbox for \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\"" Dec 13 09:12:31.714811 containerd[1470]: time="2024-12-13T09:12:31.714730271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:31.716824 containerd[1470]: time="2024-12-13T09:12:31.716722460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 09:12:31.717898 containerd[1470]: time="2024-12-13T09:12:31.717835933Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:31.721031 containerd[1470]: time="2024-12-13T09:12:31.720946543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:31.724910 containerd[1470]: time="2024-12-13T09:12:31.723898272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.494373312s" Dec 13 09:12:31.724910 containerd[1470]: time="2024-12-13T09:12:31.723955007Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 09:12:31.727104 containerd[1470]: time="2024-12-13T09:12:31.727041972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 09:12:31.730086 containerd[1470]: time="2024-12-13T09:12:31.729975302Z" level=info msg="CreateContainer within sandbox \"90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.635 [WARNING][4900] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0", GenerateName:"calico-apiserver-dd59c5464-", Namespace:"calico-apiserver", SelfLink:"", UID:"4700244a-abfc-4fc6-93e2-57b920e50bc1", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd59c5464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd", Pod:"calico-apiserver-dd59c5464-h679k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f5b3bcc039", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.640 [INFO][4900] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.641 [INFO][4900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" iface="eth0" netns="" Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.641 [INFO][4900] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.641 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.735 [INFO][4907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.737 [INFO][4907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.737 [INFO][4907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.748 [WARNING][4907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.748 [INFO][4907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.751 [INFO][4907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:31.763251 containerd[1470]: 2024-12-13 09:12:31.759 [INFO][4900] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:31.763251 containerd[1470]: time="2024-12-13T09:12:31.763175421Z" level=info msg="TearDown network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\" successfully" Dec 13 09:12:31.766126 containerd[1470]: time="2024-12-13T09:12:31.763214235Z" level=info msg="StopPodSandbox for \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\" returns successfully" Dec 13 09:12:31.766126 containerd[1470]: time="2024-12-13T09:12:31.765371662Z" level=info msg="RemovePodSandbox for \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\"" Dec 13 09:12:31.766126 containerd[1470]: time="2024-12-13T09:12:31.765443320Z" level=info msg="Forcibly stopping sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\"" Dec 13 09:12:31.767493 containerd[1470]: time="2024-12-13T09:12:31.767175163Z" level=info msg="CreateContainer within sandbox \"90d621ea092d5e36dff1933a34fe75069f403b30150aa57d227006e1a890f29a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2c6e8484ca40ab298af42b757cf790ba166550709348c0eb4a49bba75edfacc4\"" Dec 13 09:12:31.769960 containerd[1470]: time="2024-12-13T09:12:31.769274527Z" level=info msg="StartContainer for \"2c6e8484ca40ab298af42b757cf790ba166550709348c0eb4a49bba75edfacc4\"" Dec 13 09:12:31.873225 systemd[1]: Started cri-containerd-2c6e8484ca40ab298af42b757cf790ba166550709348c0eb4a49bba75edfacc4.scope - libcontainer container 2c6e8484ca40ab298af42b757cf790ba166550709348c0eb4a49bba75edfacc4. Dec 13 09:12:31.998610 containerd[1470]: time="2024-12-13T09:12:31.997536938Z" level=info msg="StartContainer for \"2c6e8484ca40ab298af42b757cf790ba166550709348c0eb4a49bba75edfacc4\" returns successfully" Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.885 [WARNING][4929] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0", GenerateName:"calico-apiserver-dd59c5464-", Namespace:"calico-apiserver", SelfLink:"", UID:"4700244a-abfc-4fc6-93e2-57b920e50bc1", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd59c5464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd", Pod:"calico-apiserver-dd59c5464-h679k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f5b3bcc039", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.886 [INFO][4929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.886 [INFO][4929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" iface="eth0" netns="" Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.886 [INFO][4929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.886 [INFO][4929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.976 [INFO][4954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.978 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.978 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.996 [WARNING][4954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:31.996 [INFO][4954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" HandleID="k8s-pod-network.03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Workload="ci--4081.2.1--e--b721934136-k8s-calico--apiserver--dd59c5464--h679k-eth0" Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:32.002 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:32.012016 containerd[1470]: 2024-12-13 09:12:32.008 [INFO][4929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e" Dec 13 09:12:32.013678 containerd[1470]: time="2024-12-13T09:12:32.012027809Z" level=info msg="TearDown network for sandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\" successfully" Dec 13 09:12:32.020799 containerd[1470]: time="2024-12-13T09:12:32.020694876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:12:32.020968 containerd[1470]: time="2024-12-13T09:12:32.020823527Z" level=info msg="RemovePodSandbox \"03211bcb90188fff2f38b44545c754fec32a8393084cab6450a3f72d350a8a3e\" returns successfully" Dec 13 09:12:32.022386 containerd[1470]: time="2024-12-13T09:12:32.021936050Z" level=info msg="StopPodSandbox for \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\"" Dec 13 09:12:32.117555 containerd[1470]: time="2024-12-13T09:12:32.117491146Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:32.120503 containerd[1470]: time="2024-12-13T09:12:32.120420973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 09:12:32.125743 containerd[1470]: time="2024-12-13T09:12:32.125537660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 398.436686ms" Dec 13 09:12:32.125743 containerd[1470]: time="2024-12-13T09:12:32.125599893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 09:12:32.127946 containerd[1470]: time="2024-12-13T09:12:32.127154039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 09:12:32.133050 containerd[1470]: time="2024-12-13T09:12:32.132985608Z" level=info msg="CreateContainer within sandbox \"8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 09:12:32.166954 containerd[1470]: time="2024-12-13T09:12:32.165625114Z" 
level=info msg="CreateContainer within sandbox \"8432936e63ed4ba18a4a48f9eac9a878b920e94b6a5f898edde24073f72531bd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d4128a596b9fb4bfdd6c4c722b19e9149ee840e771415e1d24aee89582008a0a\"" Dec 13 09:12:32.166459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150764367.mount: Deactivated successfully. Dec 13 09:12:32.170424 containerd[1470]: time="2024-12-13T09:12:32.170194940Z" level=info msg="StartContainer for \"d4128a596b9fb4bfdd6c4c722b19e9149ee840e771415e1d24aee89582008a0a\"" Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.084 [WARNING][4983] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eb0da44c-3f76-4633-8e65-0b9e15072d96", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b", Pod:"coredns-7db6d8ff4d-rbtvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a4830723c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.085 [INFO][4983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.085 [INFO][4983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" iface="eth0" netns="" Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.085 [INFO][4983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.085 [INFO][4983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.145 [INFO][4991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.145 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.145 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.165 [WARNING][4991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.165 [INFO][4991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.169 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:32.193905 containerd[1470]: 2024-12-13 09:12:32.173 [INFO][4983] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:32.198662 containerd[1470]: time="2024-12-13T09:12:32.197907900Z" level=info msg="TearDown network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\" successfully" Dec 13 09:12:32.198662 containerd[1470]: time="2024-12-13T09:12:32.197967880Z" level=info msg="StopPodSandbox for \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\" returns successfully" Dec 13 09:12:32.200170 containerd[1470]: time="2024-12-13T09:12:32.200134249Z" level=info msg="RemovePodSandbox for \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\"" Dec 13 09:12:32.201011 containerd[1470]: time="2024-12-13T09:12:32.200569212Z" level=info msg="Forcibly stopping sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\"" Dec 13 09:12:32.231986 systemd[1]: Started cri-containerd-d4128a596b9fb4bfdd6c4c722b19e9149ee840e771415e1d24aee89582008a0a.scope - libcontainer container d4128a596b9fb4bfdd6c4c722b19e9149ee840e771415e1d24aee89582008a0a. 
Dec 13 09:12:32.362310 containerd[1470]: time="2024-12-13T09:12:32.362256541Z" level=info msg="StartContainer for \"d4128a596b9fb4bfdd6c4c722b19e9149ee840e771415e1d24aee89582008a0a\" returns successfully" Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.284 [WARNING][5027] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eb0da44c-3f76-4633-8e65-0b9e15072d96", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 11, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-e-b721934136", ContainerID:"4a2aaea560632d1ad078023743f0f517f90a958485c928ce49d598d456d68d0b", Pod:"coredns-7db6d8ff4d-rbtvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a4830723c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.284 [INFO][5027] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.284 [INFO][5027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" iface="eth0" netns="" Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.284 [INFO][5027] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.284 [INFO][5027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.331 [INFO][5040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.332 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.332 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.352 [WARNING][5040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.352 [INFO][5040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" HandleID="k8s-pod-network.9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Workload="ci--4081.2.1--e--b721934136-k8s-coredns--7db6d8ff4d--rbtvp-eth0" Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.358 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:32.373907 containerd[1470]: 2024-12-13 09:12:32.368 [INFO][5027] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b" Dec 13 09:12:32.374531 containerd[1470]: time="2024-12-13T09:12:32.373945405Z" level=info msg="TearDown network for sandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\" successfully" Dec 13 09:12:32.381520 containerd[1470]: time="2024-12-13T09:12:32.381277411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 09:12:32.381774 containerd[1470]: time="2024-12-13T09:12:32.381646945Z" level=info msg="RemovePodSandbox \"9ed4efaa4a5040c925b271214c7529b89db9dde3b3fd73e3a218ea511bfea84b\" returns successfully" Dec 13 09:12:32.386214 kubelet[2567]: I1213 09:12:32.385624 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd59c5464-8chwx" podStartSLOduration=31.445672993 podStartE2EDuration="37.385596702s" podCreationTimestamp="2024-12-13 09:11:55 +0000 UTC" firstStartedPulling="2024-12-13 09:12:23.288148685 +0000 UTC m=+53.774978941" lastFinishedPulling="2024-12-13 09:12:29.228072411 +0000 UTC m=+59.714902650" observedRunningTime="2024-12-13 09:12:30.353008681 +0000 UTC m=+60.839838938" watchObservedRunningTime="2024-12-13 09:12:32.385596702 +0000 UTC m=+62.872427023" Dec 13 09:12:32.865250 kubelet[2567]: I1213 09:12:32.865098 2567 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 09:12:32.867550 kubelet[2567]: I1213 09:12:32.867419 2567 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 09:12:33.408737 kubelet[2567]: I1213 09:12:33.407243 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fg4tr" podStartSLOduration=29.135993897 podStartE2EDuration="38.407224888s" podCreationTimestamp="2024-12-13 09:11:55 +0000 UTC" firstStartedPulling="2024-12-13 09:12:22.454243518 +0000 UTC m=+52.941073753" lastFinishedPulling="2024-12-13 09:12:31.725474506 +0000 UTC m=+62.212304744" observedRunningTime="2024-12-13 09:12:32.389559446 +0000 UTC m=+62.876389702" watchObservedRunningTime="2024-12-13 09:12:33.407224888 +0000 UTC m=+63.894055144" Dec 13 09:12:34.385217 kubelet[2567]: I1213 09:12:34.385039 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:12:34.684489 containerd[1470]: time="2024-12-13T09:12:34.683024158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:34.684489 containerd[1470]: time="2024-12-13T09:12:34.683990965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 09:12:34.685322 containerd[1470]: time="2024-12-13T09:12:34.685275670Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:34.688807 containerd[1470]: time="2024-12-13T09:12:34.688744122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:34.689938 containerd[1470]: time="2024-12-13T09:12:34.689893584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.562694877s" Dec 13 09:12:34.690203 containerd[1470]: 
time="2024-12-13T09:12:34.690081447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 09:12:34.721467 containerd[1470]: time="2024-12-13T09:12:34.719459277Z" level=info msg="CreateContainer within sandbox \"02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 09:12:34.753297 containerd[1470]: time="2024-12-13T09:12:34.753237415Z" level=info msg="CreateContainer within sandbox \"02d1fac9c711b5692e758e70d755392b4e09057371e6200cac3a273374c95601\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0\"" Dec 13 09:12:34.754357 containerd[1470]: time="2024-12-13T09:12:34.754201483Z" level=info msg="StartContainer for \"14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0\"" Dec 13 09:12:34.826224 systemd[1]: Started cri-containerd-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0.scope - libcontainer container 14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0. Dec 13 09:12:34.897360 containerd[1470]: time="2024-12-13T09:12:34.897155056Z" level=info msg="StartContainer for \"14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0\" returns successfully" Dec 13 09:12:35.409496 kubelet[2567]: I1213 09:12:35.408856 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd59c5464-h679k" podStartSLOduration=34.57735541 podStartE2EDuration="40.408824399s" podCreationTimestamp="2024-12-13 09:11:55 +0000 UTC" firstStartedPulling="2024-12-13 09:12:26.295329631 +0000 UTC m=+56.782159887" lastFinishedPulling="2024-12-13 09:12:32.126798614 +0000 UTC m=+62.613628876" observedRunningTime="2024-12-13 09:12:33.410829034 +0000 UTC m=+63.897659296" watchObservedRunningTime="2024-12-13 09:12:35.408824399 +0000 UTC m=+65.895654658" Dec 13 09:12:35.481371 kubelet[2567]: I1213 09:12:35.480356 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-847f66d7bd-gnjzf" podStartSLOduration=32.18588493 podStartE2EDuration="40.480339725s" podCreationTimestamp="2024-12-13 09:11:55 +0000 UTC" firstStartedPulling="2024-12-13 09:12:26.397053474 +0000 UTC m=+56.883883716" lastFinishedPulling="2024-12-13 09:12:34.691508256 +0000 UTC m=+65.178338511" observedRunningTime="2024-12-13 09:12:35.411264699 +0000 UTC m=+65.898094933" watchObservedRunningTime="2024-12-13 09:12:35.480339725 +0000 UTC m=+65.967169981" Dec 13 09:12:36.683399 kubelet[2567]: E1213 09:12:36.682768 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:38.436713 kubelet[2567]: I1213 09:12:38.436606 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:12:39.893547 systemd[1]: run-containerd-runc-k8s.io-0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080-runc.WOg9gI.mount: Deactivated successfully. 
Dec 13 09:12:40.008300 kubelet[2567]: E1213 09:12:40.007903 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:53.683988 kubelet[2567]: E1213 09:12:53.683756 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:57.683532 kubelet[2567]: E1213 09:12:57.683439 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:06.682834 kubelet[2567]: E1213 09:13:06.682783 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:07.368097 kubelet[2567]: I1213 09:13:07.367693 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:13:09.895236 systemd[1]: run-containerd-runc-k8s.io-0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080-runc.gIkBEG.mount: Deactivated successfully. Dec 13 09:13:21.683437 kubelet[2567]: E1213 09:13:21.683368 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:45.683071 kubelet[2567]: E1213 09:13:45.682832 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:46.682915 kubelet[2567]: E1213 09:13:46.682808 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:56.683271 kubelet[2567]: E1213 09:13:56.683102 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:58.778937 update_engine[1448]: I20241213 09:13:58.778776 1448 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 09:13:58.778937 update_engine[1448]: I20241213 09:13:58.778893 1448 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 09:13:58.781325 update_engine[1448]: I20241213 09:13:58.781002 1448 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 09:13:58.783831 update_engine[1448]: I20241213 09:13:58.782812 1448 omaha_request_params.cc:62] Current group set to stable Dec 13 09:13:58.783831 update_engine[1448]: I20241213 09:13:58.783095 1448 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 09:13:58.783831 update_engine[1448]: I20241213 09:13:58.783113 1448 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 09:13:58.783831 update_engine[1448]: I20241213 09:13:58.783142 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 09:13:58.783831 update_engine[1448]: I20241213 09:13:58.783215 1448 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 09:13:58.783831 update_engine[1448]: I20241213 09:13:58.783308 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 09:13:58.783831 update_engine[1448]: I20241213 09:13:58.783319 1448 omaha_request_action.cc:272] Request: Dec 13 09:13:58.783831 update_engine[1448]: Dec 13 09:13:58.783831 update_engine[1448]: Dec 13 09:13:58.783831 update_engine[1448]: Dec 13 09:13:58.783831 update_engine[1448]: Dec 13 09:13:58.783831 update_engine[1448]: Dec 13 09:13:58.783831 update_engine[1448]: Dec 13 09:13:58.783831 update_engine[1448]: Dec 13 09:13:58.783831 update_engine[1448]: Dec 13 09:13:58.783831 update_engine[1448]: I20241213 09:13:58.783330 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 09:13:58.799381 locksmithd[1479]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 09:13:58.826181 update_engine[1448]: I20241213 09:13:58.825332 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 09:13:58.826181 update_engine[1448]: I20241213 09:13:58.825960 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 09:13:58.834859 update_engine[1448]: E20241213 09:13:58.834194 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 09:13:58.834859 update_engine[1448]: I20241213 09:13:58.834753 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 09:14:01.685715 kubelet[2567]: E1213 09:14:01.685556 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:14:02.330161 systemd[1]: Started sshd@7-146.190.157.113:22-216.10.242.26:49394.service - OpenSSH per-connection server daemon (216.10.242.26:49394). Dec 13 09:14:02.663266 sshd[5353]: kex_exchange_identification: read: Connection reset by peer Dec 13 09:14:02.663266 sshd[5353]: Connection reset by 216.10.242.26 port 49394 Dec 13 09:14:02.665160 systemd[1]: sshd@7-146.190.157.113:22-216.10.242.26:49394.service: Deactivated successfully. Dec 13 09:14:04.683580 kubelet[2567]: E1213 09:14:04.683520 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:14:08.642694 update_engine[1448]: I20241213 09:14:08.642233 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 09:14:08.642694 update_engine[1448]: I20241213 09:14:08.642561 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 09:14:08.643181 update_engine[1448]: I20241213 09:14:08.642886 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 09:14:08.643955 update_engine[1448]: E20241213 09:14:08.643893 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 09:14:08.644098 update_engine[1448]: I20241213 09:14:08.643975 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 09:14:13.683243 kubelet[2567]: E1213 09:14:13.683185 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:14:18.642752 update_engine[1448]: I20241213 09:14:18.642624 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 09:14:18.643374 update_engine[1448]: I20241213 09:14:18.643007 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 09:14:18.643374 update_engine[1448]: I20241213 09:14:18.643311 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 09:14:18.644238 update_engine[1448]: E20241213 09:14:18.644160 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 09:14:18.644366 update_engine[1448]: I20241213 09:14:18.644273 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 09:14:24.683594 kubelet[2567]: E1213 09:14:24.683515 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:14:28.643401 update_engine[1448]: I20241213 09:14:28.643278 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 09:14:28.643983 update_engine[1448]: I20241213 09:14:28.643693 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 09:14:28.644035 update_engine[1448]: I20241213 09:14:28.644004 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 09:14:28.645091 update_engine[1448]: E20241213 09:14:28.645015 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 09:14:28.645301 update_engine[1448]: I20241213 09:14:28.645121 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 09:14:28.645301 update_engine[1448]: I20241213 09:14:28.645136 1448 omaha_request_action.cc:617] Omaha request response: Dec 13 09:14:28.645301 update_engine[1448]: E20241213 09:14:28.645260 1448 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.651899 1448 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.651974 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.651988 1448 update_attempter.cc:306] Processing Done. Dec 13 09:14:28.652389 update_engine[1448]: E20241213 09:14:28.652014 1448 update_attempter.cc:619] Update failed. 
Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.652024 1448 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.652031 1448 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.652037 1448 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.652138 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.652179 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.652189 1448 omaha_request_action.cc:272] Request: Dec 13 09:14:28.652389 update_engine[1448]: Dec 13 09:14:28.652389 update_engine[1448]: Dec 13 09:14:28.652389 update_engine[1448]: Dec 13 09:14:28.652389 update_engine[1448]: Dec 13 09:14:28.652389 update_engine[1448]: Dec 13 09:14:28.652389 update_engine[1448]: Dec 13 09:14:28.652389 update_engine[1448]: I20241213 09:14:28.652199 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 09:14:28.653033 update_engine[1448]: I20241213 09:14:28.652429 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 09:14:28.653033 update_engine[1448]: I20241213 09:14:28.652760 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 09:14:28.653105 locksmithd[1479]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 09:14:28.654320 update_engine[1448]: E20241213 09:14:28.654248 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 09:14:28.654477 update_engine[1448]: I20241213 09:14:28.654349 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 09:14:28.654477 update_engine[1448]: I20241213 09:14:28.654363 1448 omaha_request_action.cc:617] Omaha request response: Dec 13 09:14:28.654477 update_engine[1448]: I20241213 09:14:28.654376 1448 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 09:14:28.654477 update_engine[1448]: I20241213 09:14:28.654383 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 09:14:28.654477 update_engine[1448]: I20241213 09:14:28.654392 1448 update_attempter.cc:306] Processing Done. Dec 13 09:14:28.654477 update_engine[1448]: I20241213 09:14:28.654402 1448 update_attempter.cc:310] Error event sent. Dec 13 09:14:28.654477 update_engine[1448]: I20241213 09:14:28.654418 1448 update_check_scheduler.cc:74] Next update check in 41m0s Dec 13 09:14:28.655084 locksmithd[1479]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 09:14:37.683676 kubelet[2567]: E1213 09:14:37.683181 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:14:39.868460 systemd[1]: run-containerd-runc-k8s.io-0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080-runc.UP4hSR.mount: Deactivated successfully. 
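The update_engine sequence above is what a deliberately disabled update server looks like: the Omaha request is posted to the literal host "disabled" (typically the result of SERVER=disabled in /etc/flatcar/update.conf, though that file is not shown in this log), curl fails with "Could not resolve host: disabled", the fetcher retries three times ten seconds apart, the failure becomes kActionCodeOmahaErrorInHTTPResponse, the error event it then tries to send fails the same way, and the next periodic check is scheduled 41 minutes out. The Python sketch below mimics that retry-then-reschedule behaviour under those assumptions; it is not update_engine code.

```python
import socket
import time

UPDATE_SERVER = "disabled"   # literal hostname from the log; resolution is expected to fail
MAX_RETRIES = 3              # the log shows "No HTTP response, retry 1..3" before giving up
RETRY_INTERVAL = 10          # seconds between attempts (09:13:58, 09:14:08, 09:14:18, 09:14:28)
NEXT_CHECK_MIN = 41          # "Next update check in 41m0s"

def post_omaha_request(host: str) -> None:
    # Only resolve the host here; a real client would POST the Omaha XML request.
    socket.getaddrinfo(host, 443)

def check_for_update() -> None:
    for attempt in range(1, MAX_RETRIES + 2):
        try:
            post_omaha_request(UPDATE_SERVER)
            print("update check succeeded")
            return
        except socket.gaierror as err:
            # mirrors "Unable to get http response code: Could not resolve host: disabled"
            print(f"attempt {attempt}: could not resolve {UPDATE_SERVER!r}: {err}")
            if attempt <= MAX_RETRIES:
                time.sleep(RETRY_INTERVAL)
    # mirrors "Update failed." followed by rescheduling the periodic check
    print(f"update failed; next update check in {NEXT_CHECK_MIN}m")

if __name__ == "__main__":
    check_for_update()
```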
Dec 13 09:14:47.683112 kubelet[2567]: E1213 09:14:47.683017 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:14:53.207985 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.cCjb4C.mount: Deactivated successfully. Dec 13 09:14:53.684622 kubelet[2567]: E1213 09:14:53.684569 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:15:06.682691 kubelet[2567]: E1213 09:15:06.682341 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:15:09.868890 systemd[1]: run-containerd-runc-k8s.io-0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080-runc.vjrWgI.mount: Deactivated successfully. Dec 13 09:15:19.683982 kubelet[2567]: E1213 09:15:19.682966 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:15:23.683766 kubelet[2567]: E1213 09:15:23.683267 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:15:27.683672 kubelet[2567]: E1213 09:15:27.682831 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:15:31.683194 kubelet[2567]: E1213 09:15:31.683108 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:15:53.225372 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.CGlk6X.mount: Deactivated successfully. Dec 13 09:15:58.684742 kubelet[2567]: E1213 09:15:58.683211 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:16:05.683734 kubelet[2567]: E1213 09:16:05.683347 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:16:09.152539 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.8fSXoo.mount: Deactivated successfully. 
Dec 13 09:16:10.688670 kubelet[2567]: E1213 09:16:10.688554 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:16:15.684966 kubelet[2567]: E1213 09:16:15.684856 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:16:20.800207 systemd[1]: Started sshd@8-146.190.157.113:22-2.57.122.33:55884.service - OpenSSH per-connection server daemon (2.57.122.33:55884). Dec 13 09:16:21.017704 sshd[5640]: Connection closed by 2.57.122.33 port 55884 Dec 13 09:16:21.016450 systemd[1]: sshd@8-146.190.157.113:22-2.57.122.33:55884.service: Deactivated successfully. Dec 13 09:16:37.685700 kubelet[2567]: E1213 09:16:37.685172 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:16:39.683956 kubelet[2567]: E1213 09:16:39.683270 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:16:42.051234 systemd[1]: Started sshd@9-146.190.157.113:22-92.255.85.188:33726.service - OpenSSH per-connection server daemon (92.255.85.188:33726). Dec 13 09:16:42.684756 kubelet[2567]: E1213 09:16:42.682779 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:16:43.277700 sshd[5688]: Connection closed by authenticating user root 92.255.85.188 port 33726 [preauth] Dec 13 09:16:43.281782 systemd[1]: sshd@9-146.190.157.113:22-92.255.85.188:33726.service: Deactivated successfully. Dec 13 09:16:44.692618 kubelet[2567]: E1213 09:16:44.692461 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:17:00.683806 kubelet[2567]: E1213 09:17:00.683184 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:17:25.684437 kubelet[2567]: E1213 09:17:25.684352 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:17:27.684362 kubelet[2567]: E1213 09:17:27.683739 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:17:37.685797 kubelet[2567]: E1213 09:17:37.683338 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:17:53.206886 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.3NyVZY.mount: Deactivated successfully. 
Dec 13 09:17:54.683691 kubelet[2567]: E1213 09:17:54.683439 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:17:56.687480 kubelet[2567]: E1213 09:17:56.687336 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:18:00.685353 kubelet[2567]: E1213 09:18:00.684225 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:18:02.684682 kubelet[2567]: E1213 09:18:02.683402 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:18:09.142852 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.jgFebT.mount: Deactivated successfully. Dec 13 09:18:09.684200 kubelet[2567]: E1213 09:18:09.683509 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:18:34.683102 kubelet[2567]: E1213 09:18:34.682779 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:18:39.151240 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.YSM7zs.mount: Deactivated successfully. Dec 13 09:18:41.684402 kubelet[2567]: E1213 09:18:41.683441 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:18:44.685812 kubelet[2567]: E1213 09:18:44.685760 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:18:53.210841 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.GD2HyX.mount: Deactivated successfully. 
Dec 13 09:19:04.683020 kubelet[2567]: E1213 09:19:04.682591 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:19:04.683891 kubelet[2567]: E1213 09:19:04.683767 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:19:07.685859 kubelet[2567]: E1213 09:19:07.685029 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:19:13.688821 kubelet[2567]: E1213 09:19:13.688744 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:19:14.686687 kubelet[2567]: E1213 09:19:14.684615 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:19:35.685290 kubelet[2567]: E1213 09:19:35.684399 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:19:39.874749 systemd[1]: run-containerd-runc-k8s.io-0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080-runc.zpwE4C.mount: Deactivated successfully. Dec 13 09:19:58.683162 kubelet[2567]: E1213 09:19:58.682792 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:19:59.683336 kubelet[2567]: E1213 09:19:59.683285 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:20:06.687711 kubelet[2567]: E1213 09:20:06.687301 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:20:09.868935 systemd[1]: run-containerd-runc-k8s.io-0a67b11ad9c135ef22fdb28c520e648fd71b29b33de06996744682e4d0613080-runc.5QWBF0.mount: Deactivated successfully. 
Dec 13 09:20:13.683465 kubelet[2567]: E1213 09:20:13.682924 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:20:31.684695 kubelet[2567]: E1213 09:20:31.683796 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:20:35.684735 kubelet[2567]: E1213 09:20:35.683572 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:20:36.683695 kubelet[2567]: E1213 09:20:36.683289 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:20:39.148798 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.1f8BWy.mount: Deactivated successfully. Dec 13 09:20:59.685435 kubelet[2567]: E1213 09:20:59.685397 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:21:04.683052 kubelet[2567]: E1213 09:21:04.682978 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:21:10.683704 kubelet[2567]: E1213 09:21:10.683070 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:21:15.684468 kubelet[2567]: E1213 09:21:15.683032 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:21:39.146746 systemd[1]: run-containerd-runc-k8s.io-14513a5bea4ebf2df827dcb8cacfe2df33ceca9753a3f0933eff438892eba0d0-runc.Tn6pEx.mount: Deactivated successfully. 
Dec 13 09:21:39.683275 kubelet[2567]: E1213 09:21:39.682770 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:21:41.683869 kubelet[2567]: E1213 09:21:41.683116 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:21:49.684628 kubelet[2567]: E1213 09:21:49.683318 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:21:53.683462 kubelet[2567]: E1213 09:21:53.683098 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:22:15.687162 kubelet[2567]: E1213 09:22:15.687085 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:22:16.683333 kubelet[2567]: E1213 09:22:16.683277 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
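The recurring kubelet dns.go "Nameserver limits exceeded" events mean the resolver configuration kubelet applies to pods lists more nameservers than the three it (and glibc) will honour, so the list is truncated; the applied line "67.207.67.2 67.207.67.3 67.207.67.2" keeps the first three entries and still contains a duplicate, which suggests the node's resolv.conf repeats the DigitalOcean resolvers. The sketch below reproduces that truncation on a hypothetical resolv.conf whose contents are an assumption chosen to match the logged line; the function is illustrative, not kubelet's implementation.

```python
MAX_DNS_NAMESERVERS = 3   # kubelet (and glibc) only honour the first three nameservers

def effective_nameservers(resolv_conf_text: str) -> list[str]:
    """Return the nameserver entries that would be applied, warning as in dns.go."""
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.strip().startswith("nameserver") and len(line.split()) > 1]
    if len(servers) > MAX_DNS_NAMESERVERS:
        kept = servers[:MAX_DNS_NAMESERVERS]
        print("Nameserver limits were exceeded, some nameservers have been omitted, "
              f"the applied nameserver line is: {' '.join(kept)}")
        return kept
    return servers

# Hypothetical node resolv.conf that would produce the line seen in the log,
# including the duplicated first resolver.
sample = """\
nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 67.207.67.3
"""

kept = effective_nameservers(sample)
print(kept)                # ['67.207.67.2', '67.207.67.3', '67.207.67.2']
print(sorted(set(kept)))   # only two distinct resolvers behind the three slots
```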