Dec 13 08:52:15.008524 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 08:52:15.008551 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:52:15.008565 kernel: BIOS-provided physical RAM map:
Dec 13 08:52:15.008572 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 08:52:15.008579 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 08:52:15.008585 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 08:52:15.008594 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 13 08:52:15.008601 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 13 08:52:15.008608 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 08:52:15.008618 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 08:52:15.008628 kernel: NX (Execute Disable) protection: active
Dec 13 08:52:15.008635 kernel: APIC: Static calls initialized
Dec 13 08:52:15.008642 kernel: SMBIOS 2.8 present.
Dec 13 08:52:15.008650 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 13 08:52:15.008659 kernel: Hypervisor detected: KVM
Dec 13 08:52:15.008670 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 08:52:15.008681 kernel: kvm-clock: using sched offset of 3935624471 cycles
Dec 13 08:52:15.008690 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 08:52:15.008698 kernel: tsc: Detected 2294.608 MHz processor
Dec 13 08:52:15.008707 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 08:52:15.010785 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 08:52:15.010799 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 13 08:52:15.010809 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 08:52:15.010818 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 08:52:15.010833 kernel: ACPI: Early table checksum verification disabled
Dec 13 08:52:15.010842 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Dec 13 08:52:15.010850 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:52:15.010859 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:52:15.010867 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:52:15.010876 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 08:52:15.010884 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:52:15.010892 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:52:15.010900 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:52:15.010912 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:52:15.010920 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Dec 13 08:52:15.010928 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Dec 13 08:52:15.010936 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 08:52:15.010945 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Dec 13 08:52:15.010953 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Dec 13 08:52:15.010961 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Dec 13 08:52:15.010980 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Dec 13 08:52:15.010989 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 08:52:15.010998 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 08:52:15.011006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 08:52:15.011015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 08:52:15.011024 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Dec 13 08:52:15.011033 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Dec 13 08:52:15.011045 kernel: Zone ranges:
Dec 13 08:52:15.011053 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 08:52:15.011062 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Dec 13 08:52:15.011071 kernel: Normal empty
Dec 13 08:52:15.011080 kernel: Movable zone start for each node
Dec 13 08:52:15.011089 kernel: Early memory node ranges
Dec 13 08:52:15.011098 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 08:52:15.011107 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 13 08:52:15.011115 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Dec 13 08:52:15.011127 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 08:52:15.011138 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 08:52:15.011147 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Dec 13 08:52:15.011156 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 08:52:15.011165 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 08:52:15.011174 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 08:52:15.011182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 08:52:15.011191 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 08:52:15.011200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 08:52:15.011212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 08:52:15.011222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 08:52:15.011253 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 08:52:15.011267 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 08:52:15.011279 kernel: TSC deadline timer available
Dec 13 08:52:15.011293 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 08:52:15.011305 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 08:52:15.011316 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 08:52:15.011333 kernel: Booting paravirtualized kernel on KVM
Dec 13 08:52:15.011346 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 08:52:15.011364 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 08:52:15.011381 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 08:52:15.011403 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 08:52:15.011422 kernel: pcpu-alloc: [0] 0 1
Dec 13 08:52:15.011441 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 08:52:15.011462 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:52:15.011482 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 08:52:15.011505 kernel: random: crng init done
Dec 13 08:52:15.011524 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 08:52:15.011544 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 08:52:15.011563 kernel: Fallback order for Node 0: 0
Dec 13 08:52:15.011583 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Dec 13 08:52:15.011602 kernel: Policy zone: DMA32
Dec 13 08:52:15.011622 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 08:52:15.011642 kernel: Memory: 1971196K/2096612K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125156K reserved, 0K cma-reserved)
Dec 13 08:52:15.011661 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 08:52:15.011684 kernel: Kernel/User page tables isolation: enabled
Dec 13 08:52:15.011704 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 08:52:15.011737 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 08:52:15.011757 kernel: Dynamic Preempt: voluntary
Dec 13 08:52:15.011777 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 08:52:15.011802 kernel: rcu: RCU event tracing is enabled.
Dec 13 08:52:15.011822 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 08:52:15.011858 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 08:52:15.011882 kernel: Rude variant of Tasks RCU enabled.
Dec 13 08:52:15.011896 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 08:52:15.011909 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 08:52:15.011924 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 08:52:15.011947 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 08:52:15.011966 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 08:52:15.011979 kernel: Console: colour VGA+ 80x25
Dec 13 08:52:15.011994 kernel: printk: console [tty0] enabled
Dec 13 08:52:15.012016 kernel: printk: console [ttyS0] enabled
Dec 13 08:52:15.012036 kernel: ACPI: Core revision 20230628
Dec 13 08:52:15.012055 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 08:52:15.012081 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 08:52:15.012100 kernel: x2apic enabled
Dec 13 08:52:15.012120 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 08:52:15.012140 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 08:52:15.012161 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Dec 13 08:52:15.012173 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Dec 13 08:52:15.012181 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 08:52:15.012191 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 08:52:15.012212 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 08:52:15.012222 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 08:52:15.012231 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 08:52:15.012243 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 08:52:15.012252 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 08:52:15.012262 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 08:52:15.012271 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 08:52:15.012280 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 08:52:15.012290 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 08:52:15.012305 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 08:52:15.012315 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 08:52:15.012324 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 08:52:15.012333 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 08:52:15.012343 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 08:52:15.012352 kernel: Freeing SMP alternatives memory: 32K
Dec 13 08:52:15.012361 kernel: pid_max: default: 32768 minimum: 301
Dec 13 08:52:15.012371 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 08:52:15.012383 kernel: landlock: Up and running.
Dec 13 08:52:15.012392 kernel: SELinux: Initializing.
Dec 13 08:52:15.012401 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 08:52:15.012411 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 08:52:15.012420 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 13 08:52:15.012430 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:52:15.012439 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:52:15.012448 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:52:15.012458 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 13 08:52:15.012470 kernel: signal: max sigframe size: 1776
Dec 13 08:52:15.012479 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 08:52:15.012489 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 08:52:15.012498 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 08:52:15.012507 kernel: smp: Bringing up secondary CPUs ...
Dec 13 08:52:15.012516 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 08:52:15.012528 kernel: .... node #0, CPUs: #1
Dec 13 08:52:15.012537 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 08:52:15.012547 kernel: smpboot: Max logical packages: 1
Dec 13 08:52:15.012559 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Dec 13 08:52:15.012568 kernel: devtmpfs: initialized
Dec 13 08:52:15.012578 kernel: x86/mm: Memory block size: 128MB
Dec 13 08:52:15.012587 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 08:52:15.012596 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 08:52:15.012606 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 08:52:15.012615 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 08:52:15.012624 kernel: audit: initializing netlink subsys (disabled)
Dec 13 08:52:15.012633 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 08:52:15.012645 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 08:52:15.012654 kernel: audit: type=2000 audit(1734079933.989:1): state=initialized audit_enabled=0 res=1
Dec 13 08:52:15.012663 kernel: cpuidle: using governor menu
Dec 13 08:52:15.012672 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 08:52:15.012682 kernel: dca service started, version 1.12.1
Dec 13 08:52:15.014781 kernel: PCI: Using configuration type 1 for base access
Dec 13 08:52:15.014795 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 08:52:15.014805 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 08:52:15.014814 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 08:52:15.014831 kernel: ACPI: Added _OSI(Module Device)
Dec 13 08:52:15.014841 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 08:52:15.014850 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 08:52:15.014859 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 08:52:15.014869 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 08:52:15.014878 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 08:52:15.014888 kernel: ACPI: Interpreter enabled
Dec 13 08:52:15.014897 kernel: ACPI: PM: (supports S0 S5)
Dec 13 08:52:15.014907 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 08:52:15.014920 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 08:52:15.014929 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 08:52:15.014939 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 08:52:15.014948 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 08:52:15.015212 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 08:52:15.015325 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 08:52:15.015423 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 08:52:15.015440 kernel: acpiphp: Slot [3] registered
Dec 13 08:52:15.015449 kernel: acpiphp: Slot [4] registered
Dec 13 08:52:15.015459 kernel: acpiphp: Slot [5] registered
Dec 13 08:52:15.015469 kernel: acpiphp: Slot [6] registered
Dec 13 08:52:15.015478 kernel: acpiphp: Slot [7] registered
Dec 13 08:52:15.015487 kernel: acpiphp: Slot [8] registered
Dec 13 08:52:15.015497 kernel: acpiphp: Slot [9] registered
Dec 13 08:52:15.015506 kernel: acpiphp: Slot [10] registered
Dec 13 08:52:15.015516 kernel: acpiphp: Slot [11] registered
Dec 13 08:52:15.015528 kernel: acpiphp: Slot [12] registered
Dec 13 08:52:15.015537 kernel: acpiphp: Slot [13] registered
Dec 13 08:52:15.015546 kernel: acpiphp: Slot [14] registered
Dec 13 08:52:15.015556 kernel: acpiphp: Slot [15] registered
Dec 13 08:52:15.015565 kernel: acpiphp: Slot [16] registered
Dec 13 08:52:15.015574 kernel: acpiphp: Slot [17] registered
Dec 13 08:52:15.015583 kernel: acpiphp: Slot [18] registered
Dec 13 08:52:15.015592 kernel: acpiphp: Slot [19] registered
Dec 13 08:52:15.015601 kernel: acpiphp: Slot [20] registered
Dec 13 08:52:15.015611 kernel: acpiphp: Slot [21] registered
Dec 13 08:52:15.015623 kernel: acpiphp: Slot [22] registered
Dec 13 08:52:15.015632 kernel: acpiphp: Slot [23] registered
Dec 13 08:52:15.015641 kernel: acpiphp: Slot [24] registered
Dec 13 08:52:15.015650 kernel: acpiphp: Slot [25] registered
Dec 13 08:52:15.015659 kernel: acpiphp: Slot [26] registered
Dec 13 08:52:15.015669 kernel: acpiphp: Slot [27] registered
Dec 13 08:52:15.015678 kernel: acpiphp: Slot [28] registered
Dec 13 08:52:15.015687 kernel: acpiphp: Slot [29] registered
Dec 13 08:52:15.015696 kernel: acpiphp: Slot [30] registered
Dec 13 08:52:15.015708 kernel: acpiphp: Slot [31] registered
Dec 13 08:52:15.015733 kernel: PCI host bridge to bus 0000:00
Dec 13 08:52:15.015859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 08:52:15.015976 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 08:52:15.016065 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 08:52:15.016185 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 08:52:15.016279 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 08:52:15.016455 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 08:52:15.016595 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 08:52:15.018785 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 08:52:15.018990 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 08:52:15.019098 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Dec 13 08:52:15.019199 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 08:52:15.019296 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 08:52:15.019440 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 08:52:15.019581 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 08:52:15.019708 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 13 08:52:15.019898 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Dec 13 08:52:15.020020 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 08:52:15.020122 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 08:52:15.020229 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 08:52:15.020338 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 08:52:15.020474 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 08:52:15.020631 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 08:52:15.022887 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Dec 13 08:52:15.023053 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 08:52:15.023185 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 08:52:15.023317 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 08:52:15.023418 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Dec 13 08:52:15.023517 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Dec 13 08:52:15.023612 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 08:52:15.023736 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 08:52:15.023858 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Dec 13 08:52:15.023977 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Dec 13 08:52:15.024081 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 08:52:15.024194 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Dec 13 08:52:15.024292 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Dec 13 08:52:15.024425 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Dec 13 08:52:15.024523 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 08:52:15.024638 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Dec 13 08:52:15.026861 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 08:52:15.026998 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Dec 13 08:52:15.027100 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 08:52:15.027253 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Dec 13 08:52:15.027386 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Dec 13 08:52:15.027523 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Dec 13 08:52:15.027648 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 13 08:52:15.027829 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 08:52:15.027959 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Dec 13 08:52:15.028056 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 13 08:52:15.028069 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 08:52:15.028079 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 08:52:15.028089 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 08:52:15.028098 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 08:52:15.028108 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 08:52:15.028121 kernel: iommu: Default domain type: Translated
Dec 13 08:52:15.028130 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 08:52:15.028140 kernel: PCI: Using ACPI for IRQ routing
Dec 13 08:52:15.028149 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 08:52:15.028158 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 08:52:15.028168 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 13 08:52:15.028265 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 08:52:15.028362 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 08:52:15.028461 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 08:52:15.028474 kernel: vgaarb: loaded
Dec 13 08:52:15.028483 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 08:52:15.028493 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 08:52:15.028502 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 08:52:15.028511 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 08:52:15.028521 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 08:52:15.028531 kernel: pnp: PnP ACPI init
Dec 13 08:52:15.028540 kernel: pnp: PnP ACPI: found 4 devices
Dec 13 08:52:15.028553 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 08:52:15.028563 kernel: NET: Registered PF_INET protocol family
Dec 13 08:52:15.028572 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 08:52:15.028582 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 08:52:15.028591 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 08:52:15.028600 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 08:52:15.028610 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 08:52:15.028619 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 08:52:15.028629 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 08:52:15.028641 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 08:52:15.028650 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 08:52:15.028660 kernel: NET: Registered PF_XDP protocol family
Dec 13 08:52:15.034706 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 08:52:15.034846 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 08:52:15.034936 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 08:52:15.035025 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 08:52:15.035112 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 08:52:15.035289 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 08:52:15.035430 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 08:52:15.035446 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 08:52:15.035548 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 38101 usecs
Dec 13 08:52:15.035561 kernel: PCI: CLS 0 bytes, default 64
Dec 13 08:52:15.035571 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 08:52:15.035581 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Dec 13 08:52:15.035591 kernel: Initialise system trusted keyrings
Dec 13 08:52:15.035601 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 08:52:15.035616 kernel: Key type asymmetric registered
Dec 13 08:52:15.035625 kernel: Asymmetric key parser 'x509' registered
Dec 13 08:52:15.035643 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 08:52:15.035653 kernel: io scheduler mq-deadline registered
Dec 13 08:52:15.035663 kernel: io scheduler kyber registered
Dec 13 08:52:15.035672 kernel: io scheduler bfq registered
Dec 13 08:52:15.035681 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 08:52:15.035691 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 08:52:15.035700 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 08:52:15.035725 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 08:52:15.035735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 08:52:15.035744 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 08:52:15.035754 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 08:52:15.035763 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 08:52:15.035773 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 08:52:15.035783 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 08:52:15.036028 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 08:52:15.036133 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 08:52:15.036223 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T08:52:14 UTC (1734079934)
Dec 13 08:52:15.036315 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 08:52:15.036327 kernel: intel_pstate: CPU model not supported
Dec 13 08:52:15.036337 kernel: NET: Registered PF_INET6 protocol family
Dec 13 08:52:15.036346 kernel: Segment Routing with IPv6
Dec 13 08:52:15.036356 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 08:52:15.036365 kernel: NET: Registered PF_PACKET protocol family
Dec 13 08:52:15.036378 kernel: Key type dns_resolver registered
Dec 13 08:52:15.036388 kernel: IPI shorthand broadcast: enabled
Dec 13 08:52:15.036397 kernel: sched_clock: Marking stable (1160002699, 157798944)->(1464869616, -147067973)
Dec 13 08:52:15.036407 kernel: registered taskstats version 1
Dec 13 08:52:15.036417 kernel: Loading compiled-in X.509 certificates
Dec 13 08:52:15.036426 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 08:52:15.036435 kernel: Key type .fscrypt registered
Dec 13 08:52:15.036444 kernel: Key type fscrypt-provisioning registered
Dec 13 08:52:15.036454 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 08:52:15.036466 kernel: ima: Allocated hash algorithm: sha1
Dec 13 08:52:15.036475 kernel: ima: No architecture policies found
Dec 13 08:52:15.036485 kernel: clk: Disabling unused clocks
Dec 13 08:52:15.036494 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 08:52:15.036503 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 08:52:15.036532 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 08:52:15.036545 kernel: Run /init as init process
Dec 13 08:52:15.036555 kernel: with arguments:
Dec 13 08:52:15.036565 kernel: /init
Dec 13 08:52:15.036578 kernel: with environment:
Dec 13 08:52:15.036587 kernel: HOME=/
Dec 13 08:52:15.036597 kernel: TERM=linux
Dec 13 08:52:15.036607 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 08:52:15.036619 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 08:52:15.036632 systemd[1]: Detected virtualization kvm.
Dec 13 08:52:15.036645 systemd[1]: Detected architecture x86-64.
Dec 13 08:52:15.036655 systemd[1]: Running in initrd.
Dec 13 08:52:15.036668 systemd[1]: No hostname configured, using default hostname.
Dec 13 08:52:15.036678 systemd[1]: Hostname set to .
Dec 13 08:52:15.036688 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 08:52:15.036699 systemd[1]: Queued start job for default target initrd.target.
Dec 13 08:52:15.036709 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:52:15.036737 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:52:15.036748 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 08:52:15.036759 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 08:52:15.036772 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 08:52:15.036783 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 08:52:15.036795 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 08:52:15.036805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 08:52:15.036816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:52:15.036826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:52:15.036836 systemd[1]: Reached target paths.target - Path Units.
Dec 13 08:52:15.036849 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 08:52:15.036860 systemd[1]: Reached target swap.target - Swaps.
Dec 13 08:52:15.036873 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 08:52:15.036883 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 08:52:15.036893 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 08:52:15.036907 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 08:52:15.036917 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 08:52:15.036927 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:52:15.036938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:52:15.036948 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:52:15.036988 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 08:52:15.036998 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 08:52:15.037008 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 08:52:15.037022 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 08:52:15.037032 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 08:52:15.037079 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 08:52:15.037090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 08:52:15.037101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:52:15.037122 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 08:52:15.037133 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:52:15.037143 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 08:52:15.037158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 08:52:15.037201 systemd-journald[182]: Collecting audit messages is disabled.
Dec 13 08:52:15.037229 systemd-journald[182]: Journal started
Dec 13 08:52:15.037252 systemd-journald[182]: Runtime Journal (/run/log/journal/7ef76b28eed24c648304ab19b95443bb) is 4.9M, max 39.3M, 34.4M free.
Dec 13 08:52:15.004914 systemd-modules-load[183]: Inserted module 'overlay'
Dec 13 08:52:15.055377 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 08:52:15.055417 kernel: Bridge firewalling registered
Dec 13 08:52:15.055431 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 08:52:15.051390 systemd-modules-load[183]: Inserted module 'br_netfilter'
Dec 13 08:52:15.056340 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:52:15.057257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:52:15.062631 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:52:15.072034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:52:15.075963 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 08:52:15.077906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 08:52:15.085912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 08:52:15.100106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:52:15.109997 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:52:15.118050 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 08:52:15.120360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:52:15.128410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:52:15.137998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 08:52:15.151206 dracut-cmdline[216]: dracut-dracut-053
Dec 13 08:52:15.158758 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:52:15.196236 systemd-resolved[221]: Positive Trust Anchors:
Dec 13 08:52:15.197327 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 08:52:15.197395 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 08:52:15.206230 systemd-resolved[221]: Defaulting to hostname 'linux'.
Dec 13 08:52:15.209547 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 08:52:15.210406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:52:15.267775 kernel: SCSI subsystem initialized
Dec 13 08:52:15.278757 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 08:52:15.290751 kernel: iscsi: registered transport (tcp)
Dec 13 08:52:15.316112 kernel: iscsi: registered transport (qla4xxx)
Dec 13 08:52:15.316216 kernel: QLogic iSCSI HBA Driver
Dec 13 08:52:15.373043 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 08:52:15.377967 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 08:52:15.418514 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 08:52:15.418601 kernel: device-mapper: uevent: version 1.0.3
Dec 13 08:52:15.418630 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 08:52:15.473807 kernel: raid6: avx2x4 gen() 16303 MB/s
Dec 13 08:52:15.489805 kernel: raid6: avx2x2 gen() 17387 MB/s
Dec 13 08:52:15.508133 kernel: raid6: avx2x1 gen() 13192 MB/s
Dec 13 08:52:15.508221 kernel: raid6: using algorithm avx2x2 gen() 17387 MB/s
Dec 13 08:52:15.527104 kernel: raid6: .... xor() 18859 MB/s, rmw enabled
Dec 13 08:52:15.527190 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 08:52:15.551757 kernel: xor: automatically using best checksumming function avx
Dec 13 08:52:15.744762 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 08:52:15.763123 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 08:52:15.771159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:52:15.807410 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Dec 13 08:52:15.818390 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:52:15.829043 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 08:52:15.856088 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Dec 13 08:52:15.910354 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 08:52:15.916189 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 08:52:16.031637 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:52:16.041823 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 08:52:16.079910 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 08:52:16.085019 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 08:52:16.088418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:52:16.089956 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 08:52:16.100099 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 08:52:16.151840 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 08:52:16.155843 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Dec 13 08:52:16.230306 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 08:52:16.230461 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 08:52:16.230477 kernel: GPT:9289727 != 125829119
Dec 13 08:52:16.230489 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 08:52:16.230501 kernel: GPT:9289727 != 125829119
Dec 13 08:52:16.230513 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 08:52:16.230525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:52:16.230537 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 08:52:16.230553 kernel: scsi host0: Virtio SCSI HBA
Dec 13 08:52:16.230692 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Dec 13 08:52:16.253898 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Dec 13 08:52:16.265315 kernel: libata version 3.00 loaded.
Dec 13 08:52:16.263959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 08:52:16.270894 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 08:52:16.264139 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:52:16.267048 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:52:16.267727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:52:16.278763 kernel: AES CTR mode by8 optimization enabled
Dec 13 08:52:16.268000 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:52:16.292660 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 08:52:16.313848 kernel: scsi host1: ata_piix
Dec 13 08:52:16.314034 kernel: scsi host2: ata_piix
Dec 13 08:52:16.314163 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Dec 13 08:52:16.314177 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Dec 13 08:52:16.268652 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:52:16.277182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:52:16.331941 kernel: ACPI: bus type USB registered
Dec 13 08:52:16.343771 kernel: usbcore: registered new interface driver usbfs
Dec 13 08:52:16.355906 kernel: usbcore: registered new interface driver hub
Dec 13 08:52:16.355982 kernel: usbcore: registered new device driver usb
Dec 13 08:52:16.374838 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 08:52:16.428570 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Dec 13 08:52:16.428602 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (449)
Dec 13 08:52:16.432175 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 08:52:16.433649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:52:16.442991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 08:52:16.450370 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 08:52:16.451365 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 08:52:16.461845 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 08:52:16.476235 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:52:16.479948 disk-uuid[533]: Primary Header is updated.
Dec 13 08:52:16.479948 disk-uuid[533]: Secondary Entries is updated.
Dec 13 08:52:16.479948 disk-uuid[533]: Secondary Header is updated.
Dec 13 08:52:16.496872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:52:16.518751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:52:16.524289 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:52:16.537759 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 13 08:52:16.549587 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 13 08:52:16.549881 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 13 08:52:16.550085 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Dec 13 08:52:16.550354 kernel: hub 1-0:1.0: USB hub found
Dec 13 08:52:16.551880 kernel: hub 1-0:1.0: 2 ports detected
Dec 13 08:52:17.518753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:52:17.519387 disk-uuid[537]: The operation has completed successfully.
Dec 13 08:52:17.570133 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 08:52:17.570296 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 08:52:17.611129 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 08:52:17.615226 sh[562]: Success
Dec 13 08:52:17.632785 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 08:52:17.708253 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 08:52:17.709386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 08:52:17.716907 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 08:52:17.742101 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 08:52:17.742200 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:52:17.742221 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 08:52:17.745074 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 08:52:17.746829 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 08:52:17.759956 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 08:52:17.761648 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 08:52:17.768022 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 08:52:17.770694 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 08:52:17.794222 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:52:17.794327 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:52:17.794365 kernel: BTRFS info (device vda6): using free space tree
Dec 13 08:52:17.798757 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 08:52:17.816891 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:52:17.816057 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 08:52:17.827111 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 08:52:17.834244 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 08:52:17.959902 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 08:52:17.985037 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 08:52:17.997797 ignition[662]: Ignition 2.19.0
Dec 13 08:52:17.997812 ignition[662]: Stage: fetch-offline
Dec 13 08:52:17.997882 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:52:17.997897 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:52:18.004021 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 08:52:17.998049 ignition[662]: parsed url from cmdline: ""
Dec 13 08:52:17.998054 ignition[662]: no config URL provided
Dec 13 08:52:17.998061 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 08:52:17.998071 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Dec 13 08:52:17.998079 ignition[662]: failed to fetch config: resource requires networking
Dec 13 08:52:17.998411 ignition[662]: Ignition finished successfully
Dec 13 08:52:18.010547 systemd-networkd[751]: lo: Link UP
Dec 13 08:52:18.010552 systemd-networkd[751]: lo: Gained carrier
Dec 13 08:52:18.012950 systemd-networkd[751]: Enumeration completed
Dec 13 08:52:18.013355 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 08:52:18.013359 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Dec 13 08:52:18.014252 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 08:52:18.014579 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 08:52:18.014585 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 08:52:18.016532 systemd[1]: Reached target network.target - Network.
Dec 13 08:52:18.016679 systemd-networkd[751]: eth0: Link UP
Dec 13 08:52:18.016683 systemd-networkd[751]: eth0: Gained carrier
Dec 13 08:52:18.016692 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 08:52:18.021066 systemd-networkd[751]: eth1: Link UP
Dec 13 08:52:18.021070 systemd-networkd[751]: eth1: Gained carrier
Dec 13 08:52:18.021080 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 08:52:18.024574 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 08:52:18.030818 systemd-networkd[751]: eth0: DHCPv4 address 144.126.221.125/20, gateway 144.126.208.1 acquired from 169.254.169.253
Dec 13 08:52:18.037928 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.6/20, gateway 10.124.0.1 acquired from 169.254.169.253
Dec 13 08:52:18.058226 ignition[755]: Ignition 2.19.0
Dec 13 08:52:18.058945 ignition[755]: Stage: fetch
Dec 13 08:52:18.059218 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:52:18.059231 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:52:18.059360 ignition[755]: parsed url from cmdline: ""
Dec 13 08:52:18.059364 ignition[755]: no config URL provided
Dec 13 08:52:18.059371 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 08:52:18.059380 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Dec 13 08:52:18.059406 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Dec 13 08:52:18.090252 ignition[755]: GET result: OK
Dec 13 08:52:18.090510 ignition[755]: parsing config with SHA512: 50df8efea52a5e6f20575b8eb96cfa5156fac6b009ae0b60b86067aa30550b3111f1d7d77e5a045aea46a8e33e9fcb86b2f2b460bff17600ca59ec43fc627e1e
Dec 13 08:52:18.096894 unknown[755]: fetched base config from "system"
Dec 13 08:52:18.096912 unknown[755]: fetched base config from "system"
Dec 13 08:52:18.097877 ignition[755]: fetch: fetch complete
Dec 13 08:52:18.096933 unknown[755]: fetched user config from "digitalocean"
Dec 13 08:52:18.097885 ignition[755]: fetch: fetch passed
Dec 13 08:52:18.097984 ignition[755]: Ignition finished successfully
Dec 13 08:52:18.101052 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 08:52:18.108980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 08:52:18.136545 ignition[762]: Ignition 2.19.0
Dec 13 08:52:18.136612 ignition[762]: Stage: kargs
Dec 13 08:52:18.137047 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:52:18.137065 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:52:18.138621 ignition[762]: kargs: kargs passed
Dec 13 08:52:18.138699 ignition[762]: Ignition finished successfully
Dec 13 08:52:18.144131 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 08:52:18.151064 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 08:52:18.176087 ignition[768]: Ignition 2.19.0
Dec 13 08:52:18.176102 ignition[768]: Stage: disks
Dec 13 08:52:18.176518 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:52:18.176541 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:52:18.179002 ignition[768]: disks: disks passed
Dec 13 08:52:18.180808 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 08:52:18.179065 ignition[768]: Ignition finished successfully
Dec 13 08:52:18.186329 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 08:52:18.187813 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 08:52:18.188810 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 08:52:18.190194 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 08:52:18.191578 systemd[1]: Reached target basic.target - Basic System.
Dec 13 08:52:18.202057 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 08:52:18.219987 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 08:52:18.225916 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 08:52:18.232905 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 08:52:18.346331 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 08:52:18.346904 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 08:52:18.348157 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 08:52:18.363991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 08:52:18.366870 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 08:52:18.370950 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Dec 13 08:52:18.380835 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784)
Dec 13 08:52:18.381056 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 08:52:18.382647 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 08:52:18.391822 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:52:18.391881 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:52:18.391910 kernel: BTRFS info (device vda6): using free space tree
Dec 13 08:52:18.382688 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 08:52:18.388192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 08:52:18.400922 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 08:52:18.403612 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 08:52:18.407100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 08:52:18.488758 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 08:52:18.509313 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory Dec 13 08:52:18.510465 coreos-metadata[787]: Dec 13 08:52:18.509 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:52:18.511848 coreos-metadata[786]: Dec 13 08:52:18.509 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:52:18.516702 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 08:52:18.522569 coreos-metadata[787]: Dec 13 08:52:18.522 INFO Fetch successful Dec 13 08:52:18.524140 coreos-metadata[786]: Dec 13 08:52:18.524 INFO Fetch successful Dec 13 08:52:18.530751 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 08:52:18.534691 coreos-metadata[787]: Dec 13 08:52:18.534 INFO wrote hostname ci-4081.2.1-4-b1553ec4eb to /sysroot/etc/hostname Dec 13 08:52:18.533549 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Dec 13 08:52:18.533710 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Dec 13 08:52:18.539177 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 08:52:18.661562 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 08:52:18.670990 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 08:52:18.675077 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 08:52:18.686762 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:52:18.723158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 08:52:18.734014 ignition[906]: INFO : Ignition 2.19.0 Dec 13 08:52:18.735953 ignition[906]: INFO : Stage: mount Dec 13 08:52:18.735953 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:52:18.735953 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:52:18.739472 ignition[906]: INFO : mount: mount passed Dec 13 08:52:18.739472 ignition[906]: INFO : Ignition finished successfully Dec 13 08:52:18.739513 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 08:52:18.740312 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 08:52:18.758048 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 08:52:18.774131 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:52:18.796833 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917) Dec 13 08:52:18.800089 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:52:18.800199 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:52:18.801788 kernel: BTRFS info (device vda6): using free space tree Dec 13 08:52:18.806755 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 08:52:18.809806 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
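The two coreos-metadata fetches above retrieve the droplet metadata document, and one of them writes the machine's hostname into /sysroot/etc/hostname. The sketch below approximates that step; the "hostname" key is assumed from DigitalOcean's metadata format, and the destination is simplified to /etc/hostname, so treat it as illustrative rather than the agent's actual code.

# Rough sketch of the hostname step logged above: fetch the metadata document
# and persist the hostname. The "hostname" key is an assumption about the
# metadata JSON; the real agent writes to /sysroot/etc/hostname in the initrd.
import json
import urllib.request

METADATA_JSON = "http://169.254.169.254/metadata/v1.json"

def write_hostname(dest: str = "/etc/hostname") -> str:
    with urllib.request.urlopen(METADATA_JSON, timeout=5) as resp:
        meta = json.load(resp)
    hostname = meta["hostname"]          # assumed field name
    with open(dest, "w") as f:
        f.write(hostname + "\n")
    return hostname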
Dec 13 08:52:18.845612 ignition[934]: INFO : Ignition 2.19.0 Dec 13 08:52:18.845612 ignition[934]: INFO : Stage: files Dec 13 08:52:18.847586 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:52:18.847586 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:52:18.847586 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Dec 13 08:52:18.850651 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 08:52:18.850651 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 08:52:18.854563 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 08:52:18.855939 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 08:52:18.857413 unknown[934]: wrote ssh authorized keys file for user: core Dec 13 08:52:18.858494 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 08:52:18.859935 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 08:52:18.861162 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 08:52:18.900263 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 08:52:18.964921 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 08:52:18.964921 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 08:52:18.967192 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 08:52:18.982407 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 08:52:19.257055 systemd-networkd[751]: eth0: Gained IPv6LL Dec 13 08:52:19.453500 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 08:52:19.715215 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 08:52:19.715215 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 08:52:19.717860 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 08:52:19.717860 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 08:52:19.717860 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 08:52:19.717860 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 08:52:19.717860 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 08:52:19.717860 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:52:19.717860 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:52:19.717860 ignition[934]: INFO : files: files passed Dec 13 08:52:19.717860 ignition[934]: INFO : Ignition finished successfully Dec 13 08:52:19.720042 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 08:52:19.729069 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 08:52:19.740048 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 08:52:19.745188 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 08:52:19.745316 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 08:52:19.753542 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:52:19.753542 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:52:19.757095 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:52:19.760657 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:52:19.761600 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 08:52:19.771344 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 08:52:19.815039 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 08:52:19.815201 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
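The files stage above is driven entirely by the user-supplied Ignition config: it installs the Helm tarball, drops several manifests into /home/core, writes the kubernetes.raw sysext link, and enables prepare-helm.service. The snippet below builds a hypothetical, heavily trimmed config fragment of that shape as a Python dict and prints it as JSON; the paths and URLs are copied from the log, but the field layout follows the Ignition v3 config spec as commonly documented and is not the exact config this droplet booted with.

# Hypothetical, trimmed-down Ignition-style config fragment illustrating the
# kind of user config that produces the files/units logged above. Treat the
# field names as a sketch of the Ignition v3 spec, not this machine's config.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True}
        ]
    },
}

print(json.dumps(config, indent=2))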
Dec 13 08:52:19.816750 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 08:52:19.817833 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 08:52:19.819156 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 08:52:19.830452 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 08:52:19.833216 systemd-networkd[751]: eth1: Gained IPv6LL Dec 13 08:52:19.847409 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:52:19.851960 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 08:52:19.883994 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:52:19.884827 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:52:19.886499 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 08:52:19.887829 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 08:52:19.888118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:52:19.889541 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 08:52:19.891122 systemd[1]: Stopped target basic.target - Basic System. Dec 13 08:52:19.892394 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 08:52:19.893563 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 08:52:19.894919 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 08:52:19.896248 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 08:52:19.897527 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 08:52:19.898905 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 08:52:19.900360 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 08:52:19.901531 systemd[1]: Stopped target swap.target - Swaps. Dec 13 08:52:19.902624 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 08:52:19.902967 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 08:52:19.904418 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:52:19.905971 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:52:19.907163 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 08:52:19.907388 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:52:19.908687 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 08:52:19.908985 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 08:52:19.910631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 08:52:19.910904 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:52:19.912348 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 08:52:19.912568 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 08:52:19.913687 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 08:52:19.914006 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 08:52:19.928657 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 08:52:19.932106 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 08:52:19.932837 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 08:52:19.933218 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:52:19.936975 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 08:52:19.937202 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 08:52:19.946521 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 08:52:19.947480 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 08:52:19.963905 ignition[987]: INFO : Ignition 2.19.0 Dec 13 08:52:19.963905 ignition[987]: INFO : Stage: umount Dec 13 08:52:19.966556 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:52:19.966556 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:52:19.966556 ignition[987]: INFO : umount: umount passed Dec 13 08:52:19.966556 ignition[987]: INFO : Ignition finished successfully Dec 13 08:52:19.966822 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 08:52:19.966996 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 08:52:19.968724 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 08:52:19.968789 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 08:52:19.969348 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 08:52:19.969391 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 08:52:19.970935 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 08:52:19.971017 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 08:52:19.972365 systemd[1]: Stopped target network.target - Network. Dec 13 08:52:19.972857 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 08:52:19.972920 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 08:52:19.973446 systemd[1]: Stopped target paths.target - Path Units. Dec 13 08:52:19.984180 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 08:52:19.987848 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:52:19.989136 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 08:52:19.990454 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 08:52:19.991878 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 08:52:19.991942 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 08:52:19.992891 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 08:52:19.992952 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 08:52:19.993878 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 08:52:19.993947 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 08:52:19.995015 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 08:52:19.995084 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 08:52:19.996291 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 08:52:19.997440 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 08:52:19.999956 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 13 08:52:20.000549 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 08:52:20.000651 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 08:52:20.002832 systemd-networkd[751]: eth0: DHCPv6 lease lost Dec 13 08:52:20.004357 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 08:52:20.004437 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 08:52:20.005825 systemd-networkd[751]: eth1: DHCPv6 lease lost Dec 13 08:52:20.007154 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 08:52:20.007326 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 08:52:20.010967 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 08:52:20.011394 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 08:52:20.013566 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 08:52:20.013672 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:52:20.019860 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 08:52:20.020415 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 08:52:20.020491 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 08:52:20.021122 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 08:52:20.021168 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:52:20.021787 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 08:52:20.021828 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 08:52:20.024030 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 08:52:20.024083 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:52:20.025825 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:52:20.041150 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 08:52:20.041833 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:52:20.044010 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 08:52:20.044123 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 08:52:20.046426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 08:52:20.046518 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 08:52:20.047966 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 08:52:20.048020 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:52:20.049090 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 08:52:20.049147 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 08:52:20.050762 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 08:52:20.050835 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 08:52:20.051827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 08:52:20.051877 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:52:20.060002 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 08:52:20.063085 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 13 08:52:20.063183 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:52:20.063887 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 08:52:20.063963 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 08:52:20.064559 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 08:52:20.064602 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:52:20.066774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:52:20.066828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:52:20.068591 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 08:52:20.068695 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 08:52:20.070063 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 08:52:20.076975 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 08:52:20.088969 systemd[1]: Switching root. Dec 13 08:52:20.151051 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Dec 13 08:52:20.151129 systemd-journald[182]: Journal stopped Dec 13 08:52:21.596138 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 08:52:21.596235 kernel: SELinux: policy capability open_perms=1 Dec 13 08:52:21.596250 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 08:52:21.596267 kernel: SELinux: policy capability always_check_network=0 Dec 13 08:52:21.596280 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 08:52:21.596292 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 08:52:21.596304 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 08:52:21.596316 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 08:52:21.596328 kernel: audit: type=1403 audit(1734079940.392:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 08:52:21.596350 systemd[1]: Successfully loaded SELinux policy in 48.165ms. Dec 13 08:52:21.596382 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.943ms. Dec 13 08:52:21.596397 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 08:52:21.596414 systemd[1]: Detected virtualization kvm. Dec 13 08:52:21.596432 systemd[1]: Detected architecture x86-64. Dec 13 08:52:21.596445 systemd[1]: Detected first boot. Dec 13 08:52:21.596458 systemd[1]: Hostname set to . Dec 13 08:52:21.596472 systemd[1]: Initializing machine ID from VM UUID. Dec 13 08:52:21.596485 zram_generator::config[1029]: No configuration found. Dec 13 08:52:21.596499 systemd[1]: Populated /etc with preset unit settings. Dec 13 08:52:21.596512 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 08:52:21.596530 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 08:52:21.596543 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 08:52:21.596558 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Dec 13 08:52:21.596575 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 08:52:21.596589 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 08:52:21.596602 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 08:52:21.596616 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 08:52:21.596643 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 08:52:21.596660 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 08:52:21.596673 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 08:52:21.596686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:52:21.596700 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:52:21.603980 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 08:52:21.604056 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 08:52:21.604085 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 08:52:21.604113 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 08:52:21.604139 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 08:52:21.604173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:52:21.604199 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 08:52:21.604226 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 08:52:21.606020 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 08:52:21.606066 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 08:52:21.606095 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:52:21.606129 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 08:52:21.606175 systemd[1]: Reached target slices.target - Slice Units. Dec 13 08:52:21.606202 systemd[1]: Reached target swap.target - Swaps. Dec 13 08:52:21.606230 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 08:52:21.606258 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 08:52:21.606285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:52:21.606312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 08:52:21.606340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:52:21.606375 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 08:52:21.606398 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 08:52:21.606424 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 08:52:21.606442 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 08:52:21.606459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:21.606478 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Dec 13 08:52:21.606495 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 08:52:21.606514 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 08:52:21.606533 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 08:52:21.606550 systemd[1]: Reached target machines.target - Containers. Dec 13 08:52:21.606572 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 08:52:21.606590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:52:21.606608 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 08:52:21.606645 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 08:52:21.606673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:52:21.606700 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 08:52:21.607304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:52:21.607341 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 08:52:21.607375 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:52:21.607403 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 08:52:21.607442 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 08:52:21.607469 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 08:52:21.607496 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 08:52:21.607549 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 08:52:21.607588 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 08:52:21.607621 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 08:52:21.607650 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 08:52:21.607683 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 08:52:21.608981 systemd-journald[1105]: Collecting audit messages is disabled. Dec 13 08:52:21.609068 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 08:52:21.609096 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 08:52:21.609123 systemd[1]: Stopped verity-setup.service. Dec 13 08:52:21.609151 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:21.609177 kernel: fuse: init (API version 7.39) Dec 13 08:52:21.609205 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 08:52:21.609231 kernel: loop: module loaded Dec 13 08:52:21.609260 systemd-journald[1105]: Journal started Dec 13 08:52:21.609309 systemd-journald[1105]: Runtime Journal (/run/log/journal/7ef76b28eed24c648304ab19b95443bb) is 4.9M, max 39.3M, 34.4M free. Dec 13 08:52:21.236953 systemd[1]: Queued start job for default target multi-user.target. Dec 13 08:52:21.611482 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Dec 13 08:52:21.262753 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 08:52:21.620079 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 08:52:21.263397 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 08:52:21.623441 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 08:52:21.625548 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 08:52:21.626819 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 08:52:21.627934 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 08:52:21.629108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:52:21.631091 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 08:52:21.631242 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 08:52:21.632312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:52:21.633780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:52:21.634658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:52:21.634840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:52:21.636309 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 08:52:21.636469 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 08:52:21.638474 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:52:21.638633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:52:21.645635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 08:52:21.646614 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 08:52:21.647628 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 08:52:21.699646 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 08:52:21.707747 kernel: ACPI: bus type drm_connector registered Dec 13 08:52:21.710881 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 08:52:21.722855 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 08:52:21.723544 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 08:52:21.723623 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 08:52:21.729592 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 08:52:21.738010 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 08:52:21.740949 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 08:52:21.741948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:52:21.746914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 08:52:21.753807 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 08:52:21.754815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 08:52:21.761305 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 08:52:21.763890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:52:21.771026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 08:52:21.771545 systemd-journald[1105]: Time spent on flushing to /var/log/journal/7ef76b28eed24c648304ab19b95443bb is 118.988ms for 980 entries. Dec 13 08:52:21.771545 systemd-journald[1105]: System Journal (/var/log/journal/7ef76b28eed24c648304ab19b95443bb) is 8.0M, max 195.6M, 187.6M free. Dec 13 08:52:21.920851 systemd-journald[1105]: Received client request to flush runtime journal. Dec 13 08:52:21.923446 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 08:52:21.784043 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 08:52:21.791021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 08:52:21.795857 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 08:52:21.812119 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 08:52:21.813081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 08:52:21.814050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:52:21.938926 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 08:52:21.816020 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 08:52:21.819266 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 08:52:21.820848 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 08:52:21.824435 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 08:52:21.828962 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 08:52:21.838971 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 08:52:21.844046 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 08:52:21.903234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:52:21.918350 udevadm[1156]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 08:52:21.928205 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 08:52:21.933197 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 08:52:21.935015 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 08:52:21.960166 systemd-tmpfiles[1148]: ACLs are not supported, ignoring. Dec 13 08:52:21.960210 systemd-tmpfiles[1148]: ACLs are not supported, ignoring. Dec 13 08:52:21.964743 kernel: loop1: detected capacity change from 0 to 142488 Dec 13 08:52:21.975343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 08:52:21.982937 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Dec 13 08:52:22.015765 kernel: loop2: detected capacity change from 0 to 8 Dec 13 08:52:22.035741 kernel: loop3: detected capacity change from 0 to 210664 Dec 13 08:52:22.094345 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 08:52:22.102768 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 08:52:22.108525 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 08:52:22.131856 kernel: loop5: detected capacity change from 0 to 142488 Dec 13 08:52:22.166037 kernel: loop6: detected capacity change from 0 to 8 Dec 13 08:52:22.173751 kernel: loop7: detected capacity change from 0 to 210664 Dec 13 08:52:22.189349 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Dec 13 08:52:22.189398 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Dec 13 08:52:22.193977 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Dec 13 08:52:22.194881 (sd-merge)[1174]: Merged extensions into '/usr'. Dec 13 08:52:22.203233 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:52:22.212922 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 08:52:22.212941 systemd[1]: Reloading... Dec 13 08:52:22.419864 zram_generator::config[1205]: No configuration found. Dec 13 08:52:22.585503 ldconfig[1142]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 08:52:22.644963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:52:22.722350 systemd[1]: Reloading finished in 508 ms. Dec 13 08:52:22.756477 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 08:52:22.762149 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 08:52:22.769130 systemd[1]: Starting ensure-sysext.service... Dec 13 08:52:22.776011 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 08:52:22.803904 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Dec 13 08:52:22.803923 systemd[1]: Reloading... Dec 13 08:52:22.813628 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 08:52:22.816895 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 08:52:22.820437 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 08:52:22.820864 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Dec 13 08:52:22.821023 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Dec 13 08:52:22.826771 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 08:52:22.827906 systemd-tmpfiles[1249]: Skipping /boot Dec 13 08:52:22.846378 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 08:52:22.846591 systemd-tmpfiles[1249]: Skipping /boot Dec 13 08:52:22.950747 zram_generator::config[1279]: No configuration found. 
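The (sd-merge) entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes and oem-digitalocean images into /usr. The small sketch below simply lists extension images found in the usual search directories; /etc/extensions appears earlier in this log (the kubernetes.raw link written by Ignition), while the other two directories are assumed from systemd-sysext's documented search path.

# Sketch: list system extension images in directories systemd-sysext scans.
# /etc/extensions is shown in the log above; the other paths are assumptions
# based on the sysext documentation.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extensions() -> list[Path]:
    images = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            images.extend(sorted(p.glob("*.raw")))
    return images

if __name__ == "__main__":
    for img in list_extensions():
        print(img)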
Dec 13 08:52:23.142234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:52:23.232989 systemd[1]: Reloading finished in 428 ms. Dec 13 08:52:23.255094 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 08:52:23.256600 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:52:23.282004 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 08:52:23.294077 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 08:52:23.299996 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 08:52:23.311084 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 08:52:23.318582 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:52:23.332007 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 08:52:23.345911 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 08:52:23.353039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:23.353470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:52:23.362960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:52:23.373402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:52:23.381387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:52:23.382457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:52:23.382697 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:23.391478 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:23.391979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:52:23.392483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:52:23.393227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:23.399152 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 08:52:23.413317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:23.415114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:52:23.426131 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 08:52:23.428250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 08:52:23.428565 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:23.440301 systemd[1]: Finished ensure-sysext.service. Dec 13 08:52:23.447980 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Dec 13 08:52:23.453116 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 08:52:23.463361 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 08:52:23.467387 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 08:52:23.470614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:52:23.470987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:52:23.478577 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 08:52:23.488675 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 08:52:23.495807 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 08:52:23.502093 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:52:23.515537 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 08:52:23.517050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:52:23.518115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:52:23.523922 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:52:23.529252 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 08:52:23.529874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 08:52:23.541902 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:52:23.542171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:52:23.543398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:52:23.562278 augenrules[1369]: No rules Dec 13 08:52:23.569973 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 08:52:23.572977 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 08:52:23.691774 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 08:52:23.734745 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1366) Dec 13 08:52:23.759435 systemd-resolved[1325]: Positive Trust Anchors: Dec 13 08:52:23.759989 systemd-resolved[1325]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 08:52:23.760182 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 08:52:23.767270 systemd-networkd[1363]: lo: Link UP Dec 13 08:52:23.767283 systemd-networkd[1363]: lo: Gained carrier Dec 13 08:52:23.767920 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 13 08:52:23.768050 systemd-resolved[1325]: Using system hostname 'ci-4081.2.1-4-b1553ec4eb'. Dec 13 08:52:23.768861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:23.769130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:52:23.777111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:52:23.787136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:52:23.798420 systemd-networkd[1363]: Enumeration completed Dec 13 08:52:23.798950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:52:23.801041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:52:23.801122 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 08:52:23.801153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:52:23.801568 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 08:52:23.803542 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 08:52:23.806329 systemd-networkd[1363]: eth1: Configuring with /run/systemd/network/10-4a:51:bc:df:1c:9d.network. Dec 13 08:52:23.809473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:52:23.809839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:52:23.813888 systemd[1]: Reached target network.target - Network. Dec 13 08:52:23.817807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:52:23.818879 systemd-networkd[1363]: eth1: Link UP Dec 13 08:52:23.818893 systemd-networkd[1363]: eth1: Gained carrier Dec 13 08:52:23.826040 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 08:52:23.828455 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:52:23.828848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:52:23.833179 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:52:23.841256 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Dec 13 08:52:23.843202 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 08:52:23.859252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:52:23.860817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:52:23.865195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:52:23.873193 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1366) Dec 13 08:52:23.874007 kernel: ISO 9660 Extensions: RRIP_1991A Dec 13 08:52:23.877631 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 13 08:52:23.907751 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1387) Dec 13 08:52:23.941743 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 08:52:23.949793 kernel: ACPI: button: Power Button [PWRF] Dec 13 08:52:24.008747 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 08:52:24.014290 systemd-networkd[1363]: eth0: Configuring with /run/systemd/network/10-aa:be:fb:f8:30:9b.network. Dec 13 08:52:24.017421 systemd-networkd[1363]: eth0: Link UP Dec 13 08:52:24.017437 systemd-networkd[1363]: eth0: Gained carrier Dec 13 08:52:24.030745 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 08:52:24.051752 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 08:52:24.074942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:52:24.091735 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 08:52:24.095743 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 08:52:24.100256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 08:52:24.109723 kernel: Console: switching to colour dummy device 80x25 Dec 13 08:52:24.109895 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 08:52:24.109916 kernel: [drm] features: -context_init Dec 13 08:52:24.117045 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 08:52:24.123733 kernel: [drm] number of scanouts: 1 Dec 13 08:52:24.123790 kernel: [drm] number of cap sets: 0 Dec 13 08:52:24.127392 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 08:52:24.136750 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 08:52:24.136882 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 08:52:24.134336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:52:24.134612 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:52:24.144741 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 08:52:24.163432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:52:24.167859 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 08:52:24.185065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:52:24.185386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:52:24.212620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
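Both NICs above are configured from per-MAC units under /run/systemd/network (10-aa:be:fb:f8:30:9b.network for eth0, 10-4a:51:bc:df:1c:9d.network for eth1). The sketch below renders a minimal unit of that shape; MACAddress= and DHCP= are standard systemd.network options, but the log does not show the contents of the real units, which almost certainly carry more settings, so this is only an approximation.

# Approximation of the per-MAC units referenced above, e.g.
# /run/systemd/network/10-aa:be:fb:f8:30:9b.network. The real units on this
# host may set addresses, routes, and DNS in addition to DHCP.
from pathlib import Path

TEMPLATE = """[Match]
MACAddress={mac}

[Network]
DHCP=ipv4
"""

def write_network_unit(mac: str, rundir: str = "/run/systemd/network") -> Path:
    unit = Path(rundir) / f"10-{mac}.network"
    unit.parent.mkdir(parents=True, exist_ok=True)
    unit.write_text(TEMPLATE.format(mac=mac))
    return unit

if __name__ == "__main__":
    print(write_network_unit("aa:be:fb:f8:30:9b").read_text())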
Dec 13 08:52:24.349955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:52:24.356827 kernel: EDAC MC: Ver: 3.0.0 Dec 13 08:52:24.387067 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 08:52:24.396151 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 08:52:24.412647 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 08:52:24.447227 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 08:52:24.449643 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:52:24.449872 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 08:52:24.450195 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 08:52:24.450386 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 08:52:24.451215 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 08:52:24.452561 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 08:52:24.452706 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 08:52:24.453534 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 08:52:24.453590 systemd[1]: Reached target paths.target - Path Units. Dec 13 08:52:24.453684 systemd[1]: Reached target timers.target - Timer Units. Dec 13 08:52:24.457224 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 08:52:24.460133 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 08:52:24.467051 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 08:52:24.472281 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 08:52:24.477626 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 08:52:24.479956 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 08:52:24.480703 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:52:24.481426 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:52:24.481474 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:52:24.486941 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 08:52:24.493084 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 08:52:24.499065 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 08:52:24.509188 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 08:52:24.523128 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 08:52:24.538998 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 08:52:24.539827 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 08:52:24.544612 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Dec 13 08:52:24.557077 coreos-metadata[1439]: Dec 13 08:52:24.556 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:52:24.560070 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 08:52:24.562623 jq[1441]: false Dec 13 08:52:24.572879 coreos-metadata[1439]: Dec 13 08:52:24.570 INFO Fetch successful Dec 13 08:52:24.573028 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 08:52:24.580619 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 08:52:24.599122 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 08:52:24.602746 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 08:52:24.603657 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 08:52:24.612550 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 08:52:24.613274 dbus-daemon[1440]: [system] SELinux support is enabled Dec 13 08:52:24.620944 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 08:52:24.622687 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 08:52:24.629237 extend-filesystems[1444]: Found loop4 Dec 13 08:52:24.631626 extend-filesystems[1444]: Found loop5 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found loop6 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found loop7 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found vda Dec 13 08:52:24.636398 extend-filesystems[1444]: Found vda1 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found vda2 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found vda3 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found usr Dec 13 08:52:24.636398 extend-filesystems[1444]: Found vda4 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found vda6 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found vda7 Dec 13 08:52:24.636398 extend-filesystems[1444]: Found vda9 Dec 13 08:52:24.636398 extend-filesystems[1444]: Checking size of /dev/vda9 Dec 13 08:52:24.636961 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 08:52:24.664486 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 08:52:24.664756 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 08:52:24.677575 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 08:52:24.678014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 08:52:24.694479 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 08:52:24.694885 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 08:52:24.699967 extend-filesystems[1444]: Resized partition /dev/vda9 Dec 13 08:52:24.707988 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Dec 13 08:52:24.732737 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 13 08:52:24.732834 jq[1455]: true Dec 13 08:52:24.752194 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1382) Dec 13 08:52:24.753164 jq[1476]: true Dec 13 08:52:24.763064 tar[1463]: linux-amd64/helm Dec 13 08:52:24.775366 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Dec 13 08:52:24.780737 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 08:52:24.780787 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 08:52:24.781298 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 08:52:24.781386 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 13 08:52:24.781404 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 08:52:24.783650 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 08:52:24.802577 update_engine[1453]: I20241213 08:52:24.801090 1453 main.cc:92] Flatcar Update Engine starting Dec 13 08:52:24.802099 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 08:52:24.807898 systemd[1]: Started update-engine.service - Update Engine. Dec 13 08:52:24.808337 update_engine[1453]: I20241213 08:52:24.807982 1453 update_check_scheduler.cc:74] Next update check in 8m7s Dec 13 08:52:24.812592 systemd-logind[1451]: New seat seat0. Dec 13 08:52:24.815403 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 08:52:24.815431 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 08:52:24.819106 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 08:52:24.821227 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 08:52:24.846260 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 08:52:24.858837 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 08:52:24.899815 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 08:52:24.899815 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 08:52:24.899815 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 08:52:24.905924 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Dec 13 08:52:24.905924 extend-filesystems[1444]: Found vdb Dec 13 08:52:24.902677 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 08:52:24.903080 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 08:52:24.944836 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:52:24.951736 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 08:52:24.971412 systemd[1]: Starting sshkeys.service... Dec 13 08:52:25.086183 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 08:52:25.100375 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 13 08:52:25.179591 coreos-metadata[1508]: Dec 13 08:52:25.174 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:52:25.190093 coreos-metadata[1508]: Dec 13 08:52:25.188 INFO Fetch successful Dec 13 08:52:25.201644 unknown[1508]: wrote ssh authorized keys file for user: core Dec 13 08:52:25.222519 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 08:52:25.272613 update-ssh-keys[1516]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:52:25.272112 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 08:52:25.278091 systemd[1]: Finished sshkeys.service. Dec 13 08:52:25.336957 systemd-networkd[1363]: eth1: Gained IPv6LL Dec 13 08:52:25.342301 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 08:52:25.343505 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 08:52:25.362246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:52:25.367816 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 08:52:25.401128 systemd-networkd[1363]: eth0: Gained IPv6LL Dec 13 08:52:25.454898 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 08:52:25.469180 containerd[1483]: time="2024-12-13T08:52:25.468981344Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 08:52:25.571750 containerd[1483]: time="2024-12-13T08:52:25.570286395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:52:25.576070 containerd[1483]: time="2024-12-13T08:52:25.575914693Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:52:25.576070 containerd[1483]: time="2024-12-13T08:52:25.576064804Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 08:52:25.576207 containerd[1483]: time="2024-12-13T08:52:25.576097242Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 08:52:25.576345 containerd[1483]: time="2024-12-13T08:52:25.576321228Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 08:52:25.577313 containerd[1483]: time="2024-12-13T08:52:25.577271542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 08:52:25.577466 containerd[1483]: time="2024-12-13T08:52:25.577438695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:52:25.577496 containerd[1483]: time="2024-12-13T08:52:25.577469813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:52:25.577919 containerd[1483]: time="2024-12-13T08:52:25.577889186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:52:25.577952 containerd[1483]: time="2024-12-13T08:52:25.577920218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 08:52:25.577952 containerd[1483]: time="2024-12-13T08:52:25.577942390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:52:25.577996 containerd[1483]: time="2024-12-13T08:52:25.577959509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 08:52:25.578148 containerd[1483]: time="2024-12-13T08:52:25.578122215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:52:25.578472 containerd[1483]: time="2024-12-13T08:52:25.578445936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:52:25.581538 containerd[1483]: time="2024-12-13T08:52:25.581402387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:52:25.581538 containerd[1483]: time="2024-12-13T08:52:25.581463183Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 08:52:25.582058 containerd[1483]: time="2024-12-13T08:52:25.581688427Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 08:52:25.582058 containerd[1483]: time="2024-12-13T08:52:25.581789812Z" level=info msg="metadata content store policy set" policy=shared Dec 13 08:52:25.602265 containerd[1483]: time="2024-12-13T08:52:25.602190917Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 08:52:25.602378 containerd[1483]: time="2024-12-13T08:52:25.602308361Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 08:52:25.602378 containerd[1483]: time="2024-12-13T08:52:25.602329889Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 08:52:25.602425 containerd[1483]: time="2024-12-13T08:52:25.602411757Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 08:52:25.602464 containerd[1483]: time="2024-12-13T08:52:25.602436262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 08:52:25.602702 containerd[1483]: time="2024-12-13T08:52:25.602683357Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 08:52:25.604776 containerd[1483]: time="2024-12-13T08:52:25.604730144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 08:52:25.605023 containerd[1483]: time="2024-12-13T08:52:25.605001585Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Dec 13 08:52:25.605054 containerd[1483]: time="2024-12-13T08:52:25.605029399Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 08:52:25.605081 containerd[1483]: time="2024-12-13T08:52:25.605052922Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 08:52:25.605106 containerd[1483]: time="2024-12-13T08:52:25.605077678Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 08:52:25.605106 containerd[1483]: time="2024-12-13T08:52:25.605098000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605120265Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605142974Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605166256Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605185148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605203563Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605224485Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605255815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605276211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605308240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605329142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605347449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605369350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605387905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.605801 containerd[1483]: time="2024-12-13T08:52:25.605408278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605428115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605448674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605467761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605485569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605503088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605527194Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605556829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605574208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.606105 containerd[1483]: time="2024-12-13T08:52:25.605595045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 08:52:25.609740 containerd[1483]: time="2024-12-13T08:52:25.607347226Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 08:52:25.609740 containerd[1483]: time="2024-12-13T08:52:25.608856419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 08:52:25.609740 containerd[1483]: time="2024-12-13T08:52:25.608897896Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 08:52:25.609740 containerd[1483]: time="2024-12-13T08:52:25.608917492Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 08:52:25.609740 containerd[1483]: time="2024-12-13T08:52:25.608934672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 08:52:25.609740 containerd[1483]: time="2024-12-13T08:52:25.608982052Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 08:52:25.609740 containerd[1483]: time="2024-12-13T08:52:25.609002711Z" level=info msg="NRI interface is disabled by configuration." Dec 13 08:52:25.609740 containerd[1483]: time="2024-12-13T08:52:25.609020802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 08:52:25.609999 containerd[1483]: time="2024-12-13T08:52:25.609503183Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 08:52:25.609999 containerd[1483]: time="2024-12-13T08:52:25.609626866Z" level=info msg="Connect containerd service" Dec 13 08:52:25.609999 containerd[1483]: time="2024-12-13T08:52:25.609705622Z" level=info msg="using legacy CRI server" Dec 13 08:52:25.609999 containerd[1483]: time="2024-12-13T08:52:25.609735177Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 08:52:25.609999 containerd[1483]: time="2024-12-13T08:52:25.609933640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 08:52:25.615120 containerd[1483]: time="2024-12-13T08:52:25.615070270Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 08:52:25.615850 
containerd[1483]: time="2024-12-13T08:52:25.615787472Z" level=info msg="Start subscribing containerd event" Dec 13 08:52:25.615898 containerd[1483]: time="2024-12-13T08:52:25.615882686Z" level=info msg="Start recovering state" Dec 13 08:52:25.615999 containerd[1483]: time="2024-12-13T08:52:25.615985060Z" level=info msg="Start event monitor" Dec 13 08:52:25.616027 containerd[1483]: time="2024-12-13T08:52:25.616012504Z" level=info msg="Start snapshots syncer" Dec 13 08:52:25.616027 containerd[1483]: time="2024-12-13T08:52:25.616025468Z" level=info msg="Start cni network conf syncer for default" Dec 13 08:52:25.616073 containerd[1483]: time="2024-12-13T08:52:25.616035800Z" level=info msg="Start streaming server" Dec 13 08:52:25.618024 containerd[1483]: time="2024-12-13T08:52:25.617987037Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 08:52:25.620408 containerd[1483]: time="2024-12-13T08:52:25.619860649Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 08:52:25.626992 containerd[1483]: time="2024-12-13T08:52:25.624033534Z" level=info msg="containerd successfully booted in 0.159722s" Dec 13 08:52:25.624213 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 08:52:25.822440 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 08:52:25.855706 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 08:52:25.870853 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 08:52:25.877004 systemd[1]: Started sshd@0-144.126.221.125:22-147.75.109.163:56616.service - OpenSSH per-connection server daemon (147.75.109.163:56616). Dec 13 08:52:25.897103 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 08:52:25.897638 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 08:52:25.912199 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 08:52:25.982578 sshd[1547]: Accepted publickey for core from 147.75.109.163 port 56616 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:25.982677 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 08:52:25.987634 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:25.998565 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 08:52:26.005429 tar[1463]: linux-amd64/LICENSE Dec 13 08:52:26.005429 tar[1463]: linux-amd64/README.md Dec 13 08:52:26.012119 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 08:52:26.016864 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 08:52:26.038109 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 08:52:26.052230 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 08:52:26.070267 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 08:52:26.079122 systemd-logind[1451]: New session 1 of user core. Dec 13 08:52:26.094749 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 08:52:26.107245 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 08:52:26.119683 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 08:52:26.285555 systemd[1562]: Queued start job for default target default.target. Dec 13 08:52:26.295382 systemd[1562]: Created slice app.slice - User Application Slice. 
Dec 13 08:52:26.295500 systemd[1562]: Reached target paths.target - Paths. Dec 13 08:52:26.295534 systemd[1562]: Reached target timers.target - Timers. Dec 13 08:52:26.297574 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 08:52:26.323354 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 08:52:26.323625 systemd[1562]: Reached target sockets.target - Sockets. Dec 13 08:52:26.323650 systemd[1562]: Reached target basic.target - Basic System. Dec 13 08:52:26.323711 systemd[1562]: Reached target default.target - Main User Target. Dec 13 08:52:26.324119 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 08:52:26.324292 systemd[1562]: Startup finished in 193ms. Dec 13 08:52:26.333017 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 08:52:26.420071 systemd[1]: Started sshd@1-144.126.221.125:22-147.75.109.163:53216.service - OpenSSH per-connection server daemon (147.75.109.163:53216). Dec 13 08:52:26.512610 sshd[1573]: Accepted publickey for core from 147.75.109.163 port 53216 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:26.514404 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:26.523706 systemd-logind[1451]: New session 2 of user core. Dec 13 08:52:26.529017 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 08:52:26.600588 sshd[1573]: pam_unix(sshd:session): session closed for user core Dec 13 08:52:26.613441 systemd[1]: sshd@1-144.126.221.125:22-147.75.109.163:53216.service: Deactivated successfully. Dec 13 08:52:26.616078 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 08:52:26.618903 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Dec 13 08:52:26.625894 systemd[1]: Started sshd@2-144.126.221.125:22-147.75.109.163:53220.service - OpenSSH per-connection server daemon (147.75.109.163:53220). Dec 13 08:52:26.629474 systemd-logind[1451]: Removed session 2. Dec 13 08:52:26.673441 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 53220 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:26.675963 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:26.683657 systemd-logind[1451]: New session 3 of user core. Dec 13 08:52:26.687932 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 08:52:26.755901 sshd[1580]: pam_unix(sshd:session): session closed for user core Dec 13 08:52:26.760226 systemd[1]: sshd@2-144.126.221.125:22-147.75.109.163:53220.service: Deactivated successfully. Dec 13 08:52:26.761208 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Dec 13 08:52:26.763300 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 08:52:26.764862 systemd-logind[1451]: Removed session 3. Dec 13 08:52:26.861967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:52:26.862664 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:52:26.864424 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 08:52:26.866837 systemd[1]: Startup finished in 1.308s (kernel) + 5.669s (initrd) + 6.521s (userspace) = 13.498s. 
Dec 13 08:52:27.683382 kubelet[1591]: E1213 08:52:27.683213 1591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:52:27.685866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:52:27.686047 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:52:27.686610 systemd[1]: kubelet.service: Consumed 1.253s CPU time. Dec 13 08:52:31.229965 systemd-resolved[1325]: Clock change detected. Flushing caches. Dec 13 08:52:31.230302 systemd-timesyncd[1345]: Contacted time server 212.227.240.160:123 (1.flatcar.pool.ntp.org). Dec 13 08:52:31.230397 systemd-timesyncd[1345]: Initial clock synchronization to Fri 2024-12-13 08:52:31.229863 UTC. Dec 13 08:52:37.585718 systemd[1]: Started sshd@3-144.126.221.125:22-147.75.109.163:34384.service - OpenSSH per-connection server daemon (147.75.109.163:34384). Dec 13 08:52:37.639243 sshd[1604]: Accepted publickey for core from 147.75.109.163 port 34384 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:37.641286 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:37.647521 systemd-logind[1451]: New session 4 of user core. Dec 13 08:52:37.653550 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 08:52:37.716457 sshd[1604]: pam_unix(sshd:session): session closed for user core Dec 13 08:52:37.728438 systemd[1]: sshd@3-144.126.221.125:22-147.75.109.163:34384.service: Deactivated successfully. Dec 13 08:52:37.730581 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 08:52:37.733395 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Dec 13 08:52:37.739671 systemd[1]: Started sshd@4-144.126.221.125:22-147.75.109.163:34398.service - OpenSSH per-connection server daemon (147.75.109.163:34398). Dec 13 08:52:37.741845 systemd-logind[1451]: Removed session 4. Dec 13 08:52:37.779537 sshd[1611]: Accepted publickey for core from 147.75.109.163 port 34398 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:37.781541 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:37.786779 systemd-logind[1451]: New session 5 of user core. Dec 13 08:52:37.793522 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 08:52:37.850088 sshd[1611]: pam_unix(sshd:session): session closed for user core Dec 13 08:52:37.866441 systemd[1]: sshd@4-144.126.221.125:22-147.75.109.163:34398.service: Deactivated successfully. Dec 13 08:52:37.868812 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 08:52:37.869752 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Dec 13 08:52:37.877715 systemd[1]: Started sshd@5-144.126.221.125:22-147.75.109.163:34408.service - OpenSSH per-connection server daemon (147.75.109.163:34408). Dec 13 08:52:37.879842 systemd-logind[1451]: Removed session 5. Dec 13 08:52:37.918537 sshd[1618]: Accepted publickey for core from 147.75.109.163 port 34408 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:37.920910 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:37.926261 systemd-logind[1451]: New session 6 of user core. 
Dec 13 08:52:37.933487 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 08:52:37.995766 sshd[1618]: pam_unix(sshd:session): session closed for user core Dec 13 08:52:38.009118 systemd[1]: sshd@5-144.126.221.125:22-147.75.109.163:34408.service: Deactivated successfully. Dec 13 08:52:38.012507 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 08:52:38.014274 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Dec 13 08:52:38.020739 systemd[1]: Started sshd@6-144.126.221.125:22-147.75.109.163:34414.service - OpenSSH per-connection server daemon (147.75.109.163:34414). Dec 13 08:52:38.023601 systemd-logind[1451]: Removed session 6. Dec 13 08:52:38.073413 sshd[1625]: Accepted publickey for core from 147.75.109.163 port 34414 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:38.074951 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:38.081206 systemd-logind[1451]: New session 7 of user core. Dec 13 08:52:38.088507 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 08:52:38.162740 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 08:52:38.163853 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:52:38.181590 sudo[1628]: pam_unix(sudo:session): session closed for user root Dec 13 08:52:38.185391 sshd[1625]: pam_unix(sshd:session): session closed for user core Dec 13 08:52:38.205412 systemd[1]: sshd@6-144.126.221.125:22-147.75.109.163:34414.service: Deactivated successfully. Dec 13 08:52:38.207738 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 08:52:38.210364 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Dec 13 08:52:38.214610 systemd[1]: Started sshd@7-144.126.221.125:22-147.75.109.163:34422.service - OpenSSH per-connection server daemon (147.75.109.163:34422). Dec 13 08:52:38.216238 systemd-logind[1451]: Removed session 7. Dec 13 08:52:38.266279 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 34422 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:38.268140 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:38.274366 systemd-logind[1451]: New session 8 of user core. Dec 13 08:52:38.280531 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 08:52:38.341097 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 08:52:38.341658 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:52:38.346734 sudo[1637]: pam_unix(sudo:session): session closed for user root Dec 13 08:52:38.354179 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 08:52:38.355096 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:52:38.379721 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 08:52:38.381764 auditctl[1640]: No rules Dec 13 08:52:38.382966 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 08:52:38.383171 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 08:52:38.386145 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Dec 13 08:52:38.430616 augenrules[1658]: No rules Dec 13 08:52:38.431894 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 08:52:38.433600 sudo[1636]: pam_unix(sudo:session): session closed for user root Dec 13 08:52:38.438468 sshd[1633]: pam_unix(sshd:session): session closed for user core Dec 13 08:52:38.454704 systemd[1]: sshd@7-144.126.221.125:22-147.75.109.163:34422.service: Deactivated successfully. Dec 13 08:52:38.456746 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 08:52:38.459517 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Dec 13 08:52:38.464779 systemd[1]: Started sshd@8-144.126.221.125:22-147.75.109.163:34428.service - OpenSSH per-connection server daemon (147.75.109.163:34428). Dec 13 08:52:38.466657 systemd-logind[1451]: Removed session 8. Dec 13 08:52:38.507192 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 08:52:38.515544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:52:38.518224 sshd[1666]: Accepted publickey for core from 147.75.109.163 port 34428 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:52:38.520721 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:52:38.536926 systemd-logind[1451]: New session 9 of user core. Dec 13 08:52:38.540443 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 08:52:38.605931 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 08:52:38.606398 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:52:38.667968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:52:38.668330 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:52:38.752472 kubelet[1682]: E1213 08:52:38.752324 1682 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:52:38.757559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:52:38.757703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:52:39.202161 (dockerd)[1702]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 08:52:39.202802 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 08:52:39.762269 dockerd[1702]: time="2024-12-13T08:52:39.762211309Z" level=info msg="Starting up" Dec 13 08:52:39.950308 dockerd[1702]: time="2024-12-13T08:52:39.950260229Z" level=info msg="Loading containers: start." Dec 13 08:52:40.093236 kernel: Initializing XFRM netlink socket Dec 13 08:52:40.192465 systemd-networkd[1363]: docker0: Link UP Dec 13 08:52:40.224817 dockerd[1702]: time="2024-12-13T08:52:40.224748171Z" level=info msg="Loading containers: done." Dec 13 08:52:40.252717 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3658294250-merged.mount: Deactivated successfully. 
Dec 13 08:52:40.261691 dockerd[1702]: time="2024-12-13T08:52:40.261634978Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 08:52:40.261868 dockerd[1702]: time="2024-12-13T08:52:40.261831401Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 08:52:40.261997 dockerd[1702]: time="2024-12-13T08:52:40.261979736Z" level=info msg="Daemon has completed initialization" Dec 13 08:52:40.339017 dockerd[1702]: time="2024-12-13T08:52:40.338893191Z" level=info msg="API listen on /run/docker.sock" Dec 13 08:52:40.339375 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 08:52:41.676456 containerd[1483]: time="2024-12-13T08:52:41.676390044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 08:52:41.685413 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 13 08:52:42.369414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594828019.mount: Deactivated successfully. Dec 13 08:52:43.798465 containerd[1483]: time="2024-12-13T08:52:43.798396295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:43.802398 containerd[1483]: time="2024-12-13T08:52:43.802304740Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 08:52:43.804528 containerd[1483]: time="2024-12-13T08:52:43.804460497Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:43.811380 containerd[1483]: time="2024-12-13T08:52:43.811287003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:43.813354 containerd[1483]: time="2024-12-13T08:52:43.813027706Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.136573497s" Dec 13 08:52:43.813354 containerd[1483]: time="2024-12-13T08:52:43.813085952Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 08:52:43.852226 containerd[1483]: time="2024-12-13T08:52:43.852178074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 08:52:44.780827 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Dec 13 08:52:45.726524 containerd[1483]: time="2024-12-13T08:52:45.726408919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:45.729756 containerd[1483]: time="2024-12-13T08:52:45.729661990Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 08:52:45.734608 containerd[1483]: time="2024-12-13T08:52:45.734520354Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:45.741344 containerd[1483]: time="2024-12-13T08:52:45.741208744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:45.743815 containerd[1483]: time="2024-12-13T08:52:45.743755862Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.891373985s" Dec 13 08:52:45.744092 containerd[1483]: time="2024-12-13T08:52:45.743976682Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 08:52:45.790383 containerd[1483]: time="2024-12-13T08:52:45.789792080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 08:52:46.662574 systemd[1]: Started sshd@9-144.126.221.125:22-218.92.0.229:56432.service - OpenSSH per-connection server daemon (218.92.0.229:56432). 
Dec 13 08:52:47.113720 containerd[1483]: time="2024-12-13T08:52:47.113532685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:47.116324 containerd[1483]: time="2024-12-13T08:52:47.116232310Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 08:52:47.119261 containerd[1483]: time="2024-12-13T08:52:47.119164898Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:47.127032 containerd[1483]: time="2024-12-13T08:52:47.126917120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:47.128947 containerd[1483]: time="2024-12-13T08:52:47.128717338Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.338758952s" Dec 13 08:52:47.128947 containerd[1483]: time="2024-12-13T08:52:47.128781313Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 08:52:47.169250 containerd[1483]: time="2024-12-13T08:52:47.169169381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 08:52:48.132643 sshd[1936]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:52:48.423591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751097256.mount: Deactivated successfully. 
Dec 13 08:52:48.950214 containerd[1483]: time="2024-12-13T08:52:48.950117975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:48.951918 containerd[1483]: time="2024-12-13T08:52:48.951864397Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 08:52:48.954067 containerd[1483]: time="2024-12-13T08:52:48.953996828Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:48.958523 containerd[1483]: time="2024-12-13T08:52:48.958433375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:48.959852 containerd[1483]: time="2024-12-13T08:52:48.959634329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.790375078s" Dec 13 08:52:48.959852 containerd[1483]: time="2024-12-13T08:52:48.959683496Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 08:52:48.991507 containerd[1483]: time="2024-12-13T08:52:48.991457506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 08:52:48.994337 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Dec 13 08:52:49.008341 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 08:52:49.019648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:52:49.169743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:52:49.181125 (kubelet)[1953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:52:49.246875 kubelet[1953]: E1213 08:52:49.246732 1953 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:52:49.250367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:52:49.250551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:52:49.611702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088931928.mount: Deactivated successfully. 
Dec 13 08:52:50.035441 sshd[1925]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:52:50.362771 sshd[2005]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:52:50.604224 containerd[1483]: time="2024-12-13T08:52:50.604085168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:50.609008 containerd[1483]: time="2024-12-13T08:52:50.608891501Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 08:52:50.610873 containerd[1483]: time="2024-12-13T08:52:50.610799980Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:50.619465 containerd[1483]: time="2024-12-13T08:52:50.619291481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:50.623488 containerd[1483]: time="2024-12-13T08:52:50.623295321Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.631794807s" Dec 13 08:52:50.623488 containerd[1483]: time="2024-12-13T08:52:50.623345629Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 08:52:50.657033 containerd[1483]: time="2024-12-13T08:52:50.656678663Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 08:52:51.277622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2005445177.mount: Deactivated successfully. 
Dec 13 08:52:51.291230 containerd[1483]: time="2024-12-13T08:52:51.291122207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:51.292831 containerd[1483]: time="2024-12-13T08:52:51.292760274Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 08:52:51.295465 containerd[1483]: time="2024-12-13T08:52:51.295397378Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:51.299911 containerd[1483]: time="2024-12-13T08:52:51.299844683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:51.300789 containerd[1483]: time="2024-12-13T08:52:51.300584284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 643.853339ms" Dec 13 08:52:51.300789 containerd[1483]: time="2024-12-13T08:52:51.300626792Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 08:52:51.328612 containerd[1483]: time="2024-12-13T08:52:51.328566012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 08:52:51.943897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792995577.mount: Deactivated successfully. 
Dec 13 08:52:52.678180 sshd[1925]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:52:53.002505 sshd[2064]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:52:53.873314 containerd[1483]: time="2024-12-13T08:52:53.873251959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:53.877538 containerd[1483]: time="2024-12-13T08:52:53.877448002Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 08:52:53.880537 containerd[1483]: time="2024-12-13T08:52:53.880457745Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:53.887126 containerd[1483]: time="2024-12-13T08:52:53.887060229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:52:53.889526 containerd[1483]: time="2024-12-13T08:52:53.889333728Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.560725714s" Dec 13 08:52:53.889526 containerd[1483]: time="2024-12-13T08:52:53.889382865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 08:52:55.257895 sshd[1925]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:52:55.425557 sshd[1925]: Received disconnect from 218.92.0.229 port 56432:11: [preauth] Dec 13 08:52:55.425557 sshd[1925]: Disconnected from authenticating user root 218.92.0.229 port 56432 [preauth] Dec 13 08:52:55.424275 systemd[1]: sshd@9-144.126.221.125:22-218.92.0.229:56432.service: Deactivated successfully. Dec 13 08:52:55.617564 systemd[1]: Started sshd@10-144.126.221.125:22-218.92.0.229:25422.service - OpenSSH per-connection server daemon (218.92.0.229:25422). Dec 13 08:52:56.881505 sshd[2126]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:52:57.915617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:52:57.929817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:52:57.958598 systemd[1]: Reloading requested from client PID 2133 ('systemctl') (unit session-9.scope)... Dec 13 08:52:57.958617 systemd[1]: Reloading... Dec 13 08:52:58.105526 zram_generator::config[2176]: No configuration found. Dec 13 08:52:58.238108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:52:58.330845 systemd[1]: Reloading finished in 371 ms. Dec 13 08:52:58.399631 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 08:52:58.399761 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 08:52:58.400170 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 08:52:58.406705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:52:58.549639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:52:58.562745 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:52:58.628989 kubelet[2230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:52:58.628989 kubelet[2230]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 08:52:58.628989 kubelet[2230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:52:58.633492 kubelet[2230]: I1213 08:52:58.633390 2230 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:52:58.893787 kubelet[2230]: I1213 08:52:58.893608 2230 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 08:52:58.893787 kubelet[2230]: I1213 08:52:58.893654 2230 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:52:58.894536 kubelet[2230]: I1213 08:52:58.894495 2230 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 08:52:58.923184 kubelet[2230]: I1213 08:52:58.923123 2230 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:52:58.925289 kubelet[2230]: E1213 08:52:58.925246 2230 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://144.126.221.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:58.953207 kubelet[2230]: I1213 08:52:58.952727 2230 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 08:52:58.954956 kubelet[2230]: I1213 08:52:58.954411 2230 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:52:58.954956 kubelet[2230]: I1213 08:52:58.954472 2230 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-4-b1553ec4eb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:52:58.955650 kubelet[2230]: I1213 08:52:58.955617 2230 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:52:58.955852 kubelet[2230]: I1213 08:52:58.955821 2230 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:52:58.957483 kubelet[2230]: I1213 08:52:58.957229 2230 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:52:58.958781 kubelet[2230]: I1213 08:52:58.958451 2230 kubelet.go:400] "Attempting to sync node with API server" Dec 13 08:52:58.958781 kubelet[2230]: I1213 08:52:58.958506 2230 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:52:58.958781 kubelet[2230]: I1213 08:52:58.958538 2230 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:52:58.958781 kubelet[2230]: I1213 08:52:58.958556 2230 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:52:58.963260 kubelet[2230]: W1213 08:52:58.962835 2230 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://144.126.221.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-4-b1553ec4eb&limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:58.963260 kubelet[2230]: E1213 08:52:58.962930 2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://144.126.221.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-4-b1553ec4eb&limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:58.963260 kubelet[2230]: W1213 08:52:58.963044 2230 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://144.126.221.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:58.963260 kubelet[2230]: E1213 08:52:58.963095 2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://144.126.221.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:58.964140 kubelet[2230]: I1213 08:52:58.963830 2230 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:52:58.967093 kubelet[2230]: I1213 08:52:58.966006 2230 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:52:58.967093 kubelet[2230]: W1213 08:52:58.966092 2230 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 08:52:58.967431 kubelet[2230]: I1213 08:52:58.967405 2230 server.go:1264] "Started kubelet" Dec 13 08:52:58.974216 kubelet[2230]: I1213 08:52:58.973831 2230 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:52:58.974783 kubelet[2230]: I1213 08:52:58.974705 2230 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:52:58.975784 kubelet[2230]: I1213 08:52:58.975235 2230 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:52:58.975784 kubelet[2230]: E1213 08:52:58.975455 2230 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://144.126.221.125:6443/api/v1/namespaces/default/events\": dial tcp 144.126.221.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-4-b1553ec4eb.1810b08da677a268 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-4-b1553ec4eb,UID:ci-4081.2.1-4-b1553ec4eb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-4-b1553ec4eb,},FirstTimestamp:2024-12-13 08:52:58.967376488 +0000 UTC m=+0.399545613,LastTimestamp:2024-12-13 08:52:58.967376488 +0000 UTC m=+0.399545613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-4-b1553ec4eb,}" Dec 13 08:52:58.977621 kubelet[2230]: I1213 08:52:58.977037 2230 server.go:455] "Adding debug handlers to kubelet server" Dec 13 08:52:58.980569 kubelet[2230]: I1213 08:52:58.979647 2230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:52:58.982015 kubelet[2230]: E1213 08:52:58.981962 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-4-b1553ec4eb\" not found" Dec 13 08:52:58.982169 kubelet[2230]: I1213 08:52:58.982158 2230 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:52:58.982419 kubelet[2230]: I1213 08:52:58.982398 2230 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 08:52:58.982590 kubelet[2230]: I1213 08:52:58.982576 2230 reconciler.go:26] "Reconciler: start to sync state" Dec 13 08:52:58.983176 kubelet[2230]: W1213 08:52:58.983120 2230 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://144.126.221.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:58.983324 kubelet[2230]: E1213 08:52:58.983308 2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://144.126.221.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:58.983738 kubelet[2230]: E1213 08:52:58.983697 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.221.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-4-b1553ec4eb?timeout=10s\": dial tcp 144.126.221.125:6443: connect: connection refused" interval="200ms" Dec 13 08:52:58.988977 kubelet[2230]: I1213 08:52:58.988881 2230 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:52:58.989122 kubelet[2230]: I1213 08:52:58.989035 2230 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:52:58.994334 kubelet[2230]: I1213 08:52:58.993979 2230 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:52:59.007140 kubelet[2230]: E1213 08:52:59.006978 2230 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:52:59.013345 kubelet[2230]: I1213 08:52:59.013304 2230 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:52:59.016213 kubelet[2230]: I1213 08:52:59.015494 2230 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 08:52:59.016213 kubelet[2230]: I1213 08:52:59.015533 2230 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:52:59.016213 kubelet[2230]: I1213 08:52:59.015555 2230 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 08:52:59.016213 kubelet[2230]: E1213 08:52:59.015607 2230 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:52:59.021528 sshd[2124]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:52:59.022014 kubelet[2230]: W1213 08:52:59.021781 2230 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://144.126.221.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:59.022014 kubelet[2230]: E1213 08:52:59.021850 2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://144.126.221.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:59.024841 kubelet[2230]: I1213 08:52:59.024429 2230 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:52:59.024841 kubelet[2230]: I1213 08:52:59.024447 2230 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:52:59.024841 kubelet[2230]: I1213 08:52:59.024466 2230 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:52:59.027825 kubelet[2230]: I1213 08:52:59.027798 2230 policy_none.go:49] "None policy: Start" Dec 13 08:52:59.029235 kubelet[2230]: I1213 08:52:59.028879 2230 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:52:59.029235 kubelet[2230]: I1213 08:52:59.028922 2230 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:52:59.040713 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 08:52:59.056306 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 08:52:59.062697 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 08:52:59.076062 kubelet[2230]: I1213 08:52:59.076032 2230 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:52:59.083696 kubelet[2230]: I1213 08:52:59.083476 2230 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 08:52:59.083696 kubelet[2230]: I1213 08:52:59.083653 2230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:52:59.087525 kubelet[2230]: E1213 08:52:59.087262 2230 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-4-b1553ec4eb\" not found" Dec 13 08:52:59.088510 kubelet[2230]: I1213 08:52:59.088317 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.089234 kubelet[2230]: E1213 08:52:59.088834 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://144.126.221.125:6443/api/v1/nodes\": dial tcp 144.126.221.125:6443: connect: connection refused" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.115930 kubelet[2230]: I1213 08:52:59.115864 2230 topology_manager.go:215] "Topology Admit Handler" podUID="7a6f7e84ca04122a92f7e467cb349ffd" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.117386 kubelet[2230]: I1213 08:52:59.117352 2230 topology_manager.go:215] "Topology Admit Handler" podUID="61a286f76e6a2ed5bb12dbe7c80446bf" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.118674 kubelet[2230]: I1213 08:52:59.118539 2230 topology_manager.go:215] "Topology Admit Handler" podUID="446e043bc931e01d4f774324938423db" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.127417 systemd[1]: Created slice kubepods-burstable-pod7a6f7e84ca04122a92f7e467cb349ffd.slice - libcontainer container kubepods-burstable-pod7a6f7e84ca04122a92f7e467cb349ffd.slice. Dec 13 08:52:59.148982 systemd[1]: Created slice kubepods-burstable-pod61a286f76e6a2ed5bb12dbe7c80446bf.slice - libcontainer container kubepods-burstable-pod61a286f76e6a2ed5bb12dbe7c80446bf.slice. Dec 13 08:52:59.161496 systemd[1]: Created slice kubepods-burstable-pod446e043bc931e01d4f774324938423db.slice - libcontainer container kubepods-burstable-pod446e043bc931e01d4f774324938423db.slice. 
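Every reflector list/watch, the node registration, and the lease requests above fail with the same "dial tcp 144.126.221.125:6443: connect: connection refused", because nothing is listening on the apiserver port yet; the kube-apiserver static pod is only being created at this point. A minimal standalone probe that reproduces the same error from the node (an illustrative sketch, not kubelet code) could look like:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the apiserver endpoint the kubelet keeps trying to reach. Until the
// kube-apiserver static pod is up, this prints the same
// "connect: connection refused" seen in the reflector errors above.
func main() {
	conn, err := net.DialTimeout("tcp", "144.126.221.125:6443", 2*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```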
Dec 13 08:52:59.184570 kubelet[2230]: E1213 08:52:59.184514 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.221.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-4-b1553ec4eb?timeout=10s\": dial tcp 144.126.221.125:6443: connect: connection refused" interval="400ms" Dec 13 08:52:59.284272 kubelet[2230]: I1213 08:52:59.284133 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a6f7e84ca04122a92f7e467cb349ffd-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-4-b1553ec4eb\" (UID: \"7a6f7e84ca04122a92f7e467cb349ffd\") " pod="kube-system/kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.284709 kubelet[2230]: I1213 08:52:59.284276 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a6f7e84ca04122a92f7e467cb349ffd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-4-b1553ec4eb\" (UID: \"7a6f7e84ca04122a92f7e467cb349ffd\") " pod="kube-system/kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.284709 kubelet[2230]: I1213 08:52:59.284333 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.284709 kubelet[2230]: I1213 08:52:59.284369 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.284709 kubelet[2230]: I1213 08:52:59.284407 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.284709 kubelet[2230]: I1213 08:52:59.284432 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.284976 kubelet[2230]: I1213 08:52:59.284464 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.284976 kubelet[2230]: I1213 08:52:59.284491 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/446e043bc931e01d4f774324938423db-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-4-b1553ec4eb\" (UID: \"446e043bc931e01d4f774324938423db\") " pod="kube-system/kube-scheduler-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.284976 kubelet[2230]: I1213 08:52:59.284515 2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a6f7e84ca04122a92f7e467cb349ffd-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-4-b1553ec4eb\" (UID: \"7a6f7e84ca04122a92f7e467cb349ffd\") " pod="kube-system/kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.290337 kubelet[2230]: I1213 08:52:59.290302 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.290760 kubelet[2230]: E1213 08:52:59.290732 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://144.126.221.125:6443/api/v1/nodes\": dial tcp 144.126.221.125:6443: connect: connection refused" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.361259 sshd[2261]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:52:59.445864 kubelet[2230]: E1213 08:52:59.445786 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:52:59.446787 containerd[1483]: time="2024-12-13T08:52:59.446725093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-4-b1553ec4eb,Uid:7a6f7e84ca04122a92f7e467cb349ffd,Namespace:kube-system,Attempt:0,}" Dec 13 08:52:59.453944 kubelet[2230]: E1213 08:52:59.453745 2230 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://144.126.221.125:6443/api/v1/namespaces/default/events\": dial tcp 144.126.221.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-4-b1553ec4eb.1810b08da677a268 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-4-b1553ec4eb,UID:ci-4081.2.1-4-b1553ec4eb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-4-b1553ec4eb,},FirstTimestamp:2024-12-13 08:52:58.967376488 +0000 UTC m=+0.399545613,LastTimestamp:2024-12-13 08:52:58.967376488 +0000 UTC m=+0.399545613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-4-b1553ec4eb,}" Dec 13 08:52:59.457406 kubelet[2230]: E1213 08:52:59.457335 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:52:59.463933 containerd[1483]: time="2024-12-13T08:52:59.463861124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-4-b1553ec4eb,Uid:61a286f76e6a2ed5bb12dbe7c80446bf,Namespace:kube-system,Attempt:0,}" Dec 13 08:52:59.465489 kubelet[2230]: E1213 08:52:59.465424 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:52:59.466066 containerd[1483]: time="2024-12-13T08:52:59.466021765Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-4-b1553ec4eb,Uid:446e043bc931e01d4f774324938423db,Namespace:kube-system,Attempt:0,}" Dec 13 08:52:59.585746 kubelet[2230]: E1213 08:52:59.585680 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.221.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-4-b1553ec4eb?timeout=10s\": dial tcp 144.126.221.125:6443: connect: connection refused" interval="800ms" Dec 13 08:52:59.693249 kubelet[2230]: I1213 08:52:59.692695 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.693249 kubelet[2230]: E1213 08:52:59.693105 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://144.126.221.125:6443/api/v1/nodes\": dial tcp 144.126.221.125:6443: connect: connection refused" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:52:59.782847 kubelet[2230]: W1213 08:52:59.782664 2230 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://144.126.221.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-4-b1553ec4eb&limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:52:59.782847 kubelet[2230]: E1213 08:52:59.782762 2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://144.126.221.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-4-b1553ec4eb&limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:53:00.004159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824932450.mount: Deactivated successfully. 
Dec 13 08:53:00.059998 containerd[1483]: time="2024-12-13T08:53:00.059778604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:00.064866 containerd[1483]: time="2024-12-13T08:53:00.064770322Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:00.067893 containerd[1483]: time="2024-12-13T08:53:00.067792762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 08:53:00.072347 containerd[1483]: time="2024-12-13T08:53:00.072054591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:53:00.076147 containerd[1483]: time="2024-12-13T08:53:00.076017617Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:00.081568 containerd[1483]: time="2024-12-13T08:53:00.081458435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:53:00.089331 containerd[1483]: time="2024-12-13T08:53:00.089186656Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:00.100803 containerd[1483]: time="2024-12-13T08:53:00.095696131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:00.100803 containerd[1483]: time="2024-12-13T08:53:00.100052633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.217469ms" Dec 13 08:53:00.104990 containerd[1483]: time="2024-12-13T08:53:00.104752150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 638.632426ms" Dec 13 08:53:00.106568 containerd[1483]: time="2024-12-13T08:53:00.106235566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 642.259655ms" Dec 13 08:53:00.225147 kubelet[2230]: W1213 08:53:00.225006 2230 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://144.126.221.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:53:00.225147 kubelet[2230]: E1213 
08:53:00.225112 2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://144.126.221.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:53:00.315495 kubelet[2230]: W1213 08:53:00.314857 2230 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://144.126.221.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:53:00.319225 kubelet[2230]: E1213 08:53:00.319045 2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://144.126.221.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:53:00.356844 kubelet[2230]: W1213 08:53:00.356535 2230 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://144.126.221.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:53:00.359272 kubelet[2230]: E1213 08:53:00.356945 2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://144.126.221.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:53:00.383605 containerd[1483]: time="2024-12-13T08:53:00.383435803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:00.384127 containerd[1483]: time="2024-12-13T08:53:00.384018550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:00.384351 containerd[1483]: time="2024-12-13T08:53:00.384291323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:00.385497 containerd[1483]: time="2024-12-13T08:53:00.385410879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:00.386530 kubelet[2230]: E1213 08:53:00.386451 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.221.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-4-b1553ec4eb?timeout=10s\": dial tcp 144.126.221.125:6443: connect: connection refused" interval="1.6s" Dec 13 08:53:00.401589 containerd[1483]: time="2024-12-13T08:53:00.401406521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:00.401884 containerd[1483]: time="2024-12-13T08:53:00.401495025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:00.402086 containerd[1483]: time="2024-12-13T08:53:00.402033871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:00.402460 containerd[1483]: time="2024-12-13T08:53:00.402390125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:00.406493 containerd[1483]: time="2024-12-13T08:53:00.406355901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:00.409686 containerd[1483]: time="2024-12-13T08:53:00.406764238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:00.409686 containerd[1483]: time="2024-12-13T08:53:00.409342605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:00.409686 containerd[1483]: time="2024-12-13T08:53:00.409502725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:00.438400 systemd[1]: Started cri-containerd-c69f7959ea7832280a2bc79e5e6ca808c5861b103450032c68c30d47b7965104.scope - libcontainer container c69f7959ea7832280a2bc79e5e6ca808c5861b103450032c68c30d47b7965104. Dec 13 08:53:00.457846 systemd[1]: Started cri-containerd-56db482e8c01345ae9e272e5bf7ad08760a660d098cd2d606a9ad137ed5f97e6.scope - libcontainer container 56db482e8c01345ae9e272e5bf7ad08760a660d098cd2d606a9ad137ed5f97e6. Dec 13 08:53:00.463045 systemd[1]: Started cri-containerd-6b3b1a502a8c0372a5e6a89ef65879bbf2ee45f27a37c29a1cb537bab85e54f0.scope - libcontainer container 6b3b1a502a8c0372a5e6a89ef65879bbf2ee45f27a37c29a1cb537bab85e54f0. Dec 13 08:53:00.496418 kubelet[2230]: I1213 08:53:00.496376 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:00.496953 kubelet[2230]: E1213 08:53:00.496833 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://144.126.221.125:6443/api/v1/nodes\": dial tcp 144.126.221.125:6443: connect: connection refused" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:00.554088 containerd[1483]: time="2024-12-13T08:53:00.553952086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-4-b1553ec4eb,Uid:7a6f7e84ca04122a92f7e467cb349ffd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b3b1a502a8c0372a5e6a89ef65879bbf2ee45f27a37c29a1cb537bab85e54f0\"" Dec 13 08:53:00.557857 kubelet[2230]: E1213 08:53:00.557749 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:00.569277 containerd[1483]: time="2024-12-13T08:53:00.567607647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-4-b1553ec4eb,Uid:61a286f76e6a2ed5bb12dbe7c80446bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"56db482e8c01345ae9e272e5bf7ad08760a660d098cd2d606a9ad137ed5f97e6\"" Dec 13 08:53:00.572698 kubelet[2230]: E1213 08:53:00.572449 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:00.579221 containerd[1483]: time="2024-12-13T08:53:00.576742456Z" level=info msg="CreateContainer within sandbox \"6b3b1a502a8c0372a5e6a89ef65879bbf2ee45f27a37c29a1cb537bab85e54f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 08:53:00.582206 containerd[1483]: time="2024-12-13T08:53:00.582152203Z" level=info msg="CreateContainer within 
sandbox \"56db482e8c01345ae9e272e5bf7ad08760a660d098cd2d606a9ad137ed5f97e6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 08:53:00.595017 containerd[1483]: time="2024-12-13T08:53:00.594934031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-4-b1553ec4eb,Uid:446e043bc931e01d4f774324938423db,Namespace:kube-system,Attempt:0,} returns sandbox id \"c69f7959ea7832280a2bc79e5e6ca808c5861b103450032c68c30d47b7965104\"" Dec 13 08:53:00.597321 kubelet[2230]: E1213 08:53:00.596895 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:00.601429 containerd[1483]: time="2024-12-13T08:53:00.601375585Z" level=info msg="CreateContainer within sandbox \"c69f7959ea7832280a2bc79e5e6ca808c5861b103450032c68c30d47b7965104\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 08:53:00.642854 containerd[1483]: time="2024-12-13T08:53:00.642658965Z" level=info msg="CreateContainer within sandbox \"6b3b1a502a8c0372a5e6a89ef65879bbf2ee45f27a37c29a1cb537bab85e54f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c67a5f3857cc2613c9a459031cbc2cceb61169925acf4083de64a3e651511d1\"" Dec 13 08:53:00.644232 containerd[1483]: time="2024-12-13T08:53:00.644094212Z" level=info msg="StartContainer for \"5c67a5f3857cc2613c9a459031cbc2cceb61169925acf4083de64a3e651511d1\"" Dec 13 08:53:00.647438 containerd[1483]: time="2024-12-13T08:53:00.647358389Z" level=info msg="CreateContainer within sandbox \"56db482e8c01345ae9e272e5bf7ad08760a660d098cd2d606a9ad137ed5f97e6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"681a87855fff2b9343b42d4a839c2ef983cc90dcaa5f1d1fbba66c0fee680243\"" Dec 13 08:53:00.648927 containerd[1483]: time="2024-12-13T08:53:00.648851538Z" level=info msg="StartContainer for \"681a87855fff2b9343b42d4a839c2ef983cc90dcaa5f1d1fbba66c0fee680243\"" Dec 13 08:53:00.689894 containerd[1483]: time="2024-12-13T08:53:00.689827557Z" level=info msg="CreateContainer within sandbox \"c69f7959ea7832280a2bc79e5e6ca808c5861b103450032c68c30d47b7965104\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c6300c9954848f50472e3b3b1c7d60fa92973c6f3334c66b8e5fdb29144c7f77\"" Dec 13 08:53:00.691654 containerd[1483]: time="2024-12-13T08:53:00.691514152Z" level=info msg="StartContainer for \"c6300c9954848f50472e3b3b1c7d60fa92973c6f3334c66b8e5fdb29144c7f77\"" Dec 13 08:53:00.700705 systemd[1]: Started cri-containerd-5c67a5f3857cc2613c9a459031cbc2cceb61169925acf4083de64a3e651511d1.scope - libcontainer container 5c67a5f3857cc2613c9a459031cbc2cceb61169925acf4083de64a3e651511d1. Dec 13 08:53:00.703841 systemd[1]: Started cri-containerd-681a87855fff2b9343b42d4a839c2ef983cc90dcaa5f1d1fbba66c0fee680243.scope - libcontainer container 681a87855fff2b9343b42d4a839c2ef983cc90dcaa5f1d1fbba66c0fee680243. Dec 13 08:53:00.765510 systemd[1]: Started cri-containerd-c6300c9954848f50472e3b3b1c7d60fa92973c6f3334c66b8e5fdb29144c7f77.scope - libcontainer container c6300c9954848f50472e3b3b1c7d60fa92973c6f3334c66b8e5fdb29144c7f77. 
Dec 13 08:53:00.849932 containerd[1483]: time="2024-12-13T08:53:00.849503575Z" level=info msg="StartContainer for \"5c67a5f3857cc2613c9a459031cbc2cceb61169925acf4083de64a3e651511d1\" returns successfully" Dec 13 08:53:00.851745 containerd[1483]: time="2024-12-13T08:53:00.849900538Z" level=info msg="StartContainer for \"681a87855fff2b9343b42d4a839c2ef983cc90dcaa5f1d1fbba66c0fee680243\" returns successfully" Dec 13 08:53:00.894611 containerd[1483]: time="2024-12-13T08:53:00.894425783Z" level=info msg="StartContainer for \"c6300c9954848f50472e3b3b1c7d60fa92973c6f3334c66b8e5fdb29144c7f77\" returns successfully" Dec 13 08:53:01.035006 kubelet[2230]: E1213 08:53:01.033120 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:01.041435 kubelet[2230]: E1213 08:53:01.041399 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:01.046347 kubelet[2230]: E1213 08:53:01.046058 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:01.112241 kubelet[2230]: E1213 08:53:01.112051 2230 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://144.126.221.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 144.126.221.125:6443: connect: connection refused Dec 13 08:53:01.911831 sshd[2124]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:53:02.048405 kubelet[2230]: E1213 08:53:02.048358 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:02.101114 kubelet[2230]: I1213 08:53:02.101063 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:02.256180 sshd[2507]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:53:03.962297 kubelet[2230]: I1213 08:53:03.961125 2230 apiserver.go:52] "Watching apiserver" Dec 13 08:53:03.967559 kubelet[2230]: E1213 08:53:03.967445 2230 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-4-b1553ec4eb\" not found" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:03.983045 kubelet[2230]: I1213 08:53:03.982990 2230 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 08:53:04.060399 kubelet[2230]: I1213 08:53:04.060329 2230 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:04.551124 sshd[2124]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:53:04.722857 sshd[2124]: Received disconnect from 218.92.0.229 port 25422:11: [preauth] Dec 13 08:53:04.722857 sshd[2124]: Disconnected from authenticating user root 218.92.0.229 port 25422 [preauth] Dec 13 08:53:04.726677 systemd[1]: sshd@10-144.126.221.125:22-218.92.0.229:25422.service: Deactivated successfully. 
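The sshd lines threaded through this boot are a routine root-password brute-force attempt from 218.92.0.229; each per-connection service above is torn down after a few PAM denials and a new one immediately takes its place. A small offline tally of such denials per source address (an illustrative sketch over a saved journal extract, not anything running on this host):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Count "PAM: Permission denied for <user> from <addr>" lines per source
// address in log text supplied on stdin. The pattern matches the sshd
// message format seen in the log above.
var denied = regexp.MustCompile(`PAM: Permission denied for \S+ from (\S+)`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := denied.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for addr, n := range counts {
		fmt.Printf("%s: %d failed attempts\n", addr, n)
	}
}
```

Piping a journal dump into it (for example `journalctl --no-pager | go run count_denied.go`, where the file name is hypothetical) would report the per-address totals.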
Dec 13 08:53:04.881786 systemd[1]: Started sshd@11-144.126.221.125:22-218.92.0.229:27302.service - OpenSSH per-connection server daemon (218.92.0.229:27302). Dec 13 08:53:04.977121 kubelet[2230]: W1213 08:53:04.977085 2230 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:53:04.978426 kubelet[2230]: E1213 08:53:04.978304 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:05.055777 kubelet[2230]: E1213 08:53:05.055703 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:06.045105 sshd[2514]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:53:06.514749 systemd[1]: Reloading requested from client PID 2516 ('systemctl') (unit session-9.scope)... Dec 13 08:53:06.514774 systemd[1]: Reloading... Dec 13 08:53:06.616231 zram_generator::config[2557]: No configuration found. Dec 13 08:53:06.803242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:53:06.944247 systemd[1]: Reloading finished in 428 ms. Dec 13 08:53:07.002923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:53:07.003430 kubelet[2230]: E1213 08:53:07.003153 2230 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.2.1-4-b1553ec4eb.1810b08da677a268 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-4-b1553ec4eb,UID:ci-4081.2.1-4-b1553ec4eb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-4-b1553ec4eb,},FirstTimestamp:2024-12-13 08:52:58.967376488 +0000 UTC m=+0.399545613,LastTimestamp:2024-12-13 08:52:58.967376488 +0000 UTC m=+0.399545613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-4-b1553ec4eb,}" Dec 13 08:53:07.011382 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 08:53:07.012165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:53:07.024311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:53:07.186293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:53:07.200105 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:53:07.277865 kubelet[2608]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:53:07.277865 kubelet[2608]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 08:53:07.277865 kubelet[2608]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:53:07.279391 kubelet[2608]: I1213 08:53:07.279140 2608 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:53:07.289253 kubelet[2608]: I1213 08:53:07.288950 2608 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 08:53:07.289253 kubelet[2608]: I1213 08:53:07.288985 2608 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:53:07.289625 kubelet[2608]: I1213 08:53:07.289345 2608 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 08:53:07.293209 kubelet[2608]: I1213 08:53:07.293107 2608 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 08:53:07.295653 kubelet[2608]: I1213 08:53:07.295611 2608 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:53:07.311147 kubelet[2608]: I1213 08:53:07.310616 2608 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 08:53:07.311147 kubelet[2608]: I1213 08:53:07.310946 2608 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:53:07.311899 kubelet[2608]: I1213 08:53:07.310978 2608 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-4-b1553ec4eb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:53:07.312350 kubelet[2608]: I1213 08:53:07.312325 2608 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:53:07.312441 kubelet[2608]: I1213 08:53:07.312360 2608 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:53:07.312441 kubelet[2608]: I1213 08:53:07.312423 
2608 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:53:07.312547 kubelet[2608]: I1213 08:53:07.312542 2608 kubelet.go:400] "Attempting to sync node with API server" Dec 13 08:53:07.312608 kubelet[2608]: I1213 08:53:07.312556 2608 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:53:07.312608 kubelet[2608]: I1213 08:53:07.312585 2608 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:53:07.312608 kubelet[2608]: I1213 08:53:07.312604 2608 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:53:07.314680 kubelet[2608]: I1213 08:53:07.314660 2608 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:53:07.314905 kubelet[2608]: I1213 08:53:07.314828 2608 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:53:07.317328 kubelet[2608]: I1213 08:53:07.317296 2608 server.go:1264] "Started kubelet" Dec 13 08:53:07.325221 kubelet[2608]: I1213 08:53:07.324553 2608 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:53:07.330057 kubelet[2608]: E1213 08:53:07.330026 2608 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:53:07.336337 kubelet[2608]: I1213 08:53:07.336285 2608 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:53:07.341282 kubelet[2608]: I1213 08:53:07.340261 2608 server.go:455] "Adding debug handlers to kubelet server" Dec 13 08:53:07.341437 kubelet[2608]: I1213 08:53:07.341277 2608 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:53:07.341499 kubelet[2608]: I1213 08:53:07.341484 2608 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:53:07.346635 kubelet[2608]: I1213 08:53:07.346594 2608 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:53:07.348233 kubelet[2608]: I1213 08:53:07.347894 2608 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 08:53:07.348233 kubelet[2608]: I1213 08:53:07.348044 2608 reconciler.go:26] "Reconciler: start to sync state" Dec 13 08:53:07.351545 kubelet[2608]: I1213 08:53:07.351348 2608 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:53:07.354437 kubelet[2608]: I1213 08:53:07.354069 2608 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 08:53:07.354437 kubelet[2608]: I1213 08:53:07.354109 2608 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:53:07.354437 kubelet[2608]: I1213 08:53:07.354133 2608 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 08:53:07.354437 kubelet[2608]: E1213 08:53:07.354223 2608 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:53:07.365257 kubelet[2608]: I1213 08:53:07.365217 2608 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:53:07.365556 kubelet[2608]: I1213 08:53:07.365530 2608 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:53:07.368007 kubelet[2608]: I1213 08:53:07.367985 2608 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:53:07.427124 kubelet[2608]: I1213 08:53:07.426988 2608 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:53:07.427124 kubelet[2608]: I1213 08:53:07.427109 2608 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:53:07.427124 kubelet[2608]: I1213 08:53:07.427137 2608 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:53:07.427510 kubelet[2608]: I1213 08:53:07.427480 2608 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 08:53:07.427549 kubelet[2608]: I1213 08:53:07.427505 2608 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 08:53:07.427549 kubelet[2608]: I1213 08:53:07.427533 2608 policy_none.go:49] "None policy: Start" Dec 13 08:53:07.428649 kubelet[2608]: I1213 08:53:07.428624 2608 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:53:07.428776 kubelet[2608]: I1213 08:53:07.428662 2608 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:53:07.429058 kubelet[2608]: I1213 08:53:07.429027 2608 state_mem.go:75] "Updated machine memory state" Dec 13 08:53:07.435030 kubelet[2608]: I1213 08:53:07.434992 2608 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:53:07.435735 kubelet[2608]: I1213 08:53:07.435286 2608 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 08:53:07.435735 kubelet[2608]: I1213 08:53:07.435447 2608 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:53:07.449904 kubelet[2608]: I1213 08:53:07.449630 2608 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.454754 kubelet[2608]: I1213 08:53:07.454709 2608 topology_manager.go:215] "Topology Admit Handler" podUID="7a6f7e84ca04122a92f7e467cb349ffd" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.454918 kubelet[2608]: I1213 08:53:07.454812 2608 topology_manager.go:215] "Topology Admit Handler" podUID="61a286f76e6a2ed5bb12dbe7c80446bf" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.454918 kubelet[2608]: I1213 08:53:07.454870 2608 topology_manager.go:215] "Topology Admit Handler" podUID="446e043bc931e01d4f774324938423db" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.479742 kubelet[2608]: W1213 08:53:07.479451 2608 warnings.go:70] metadata.name: this 
is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:53:07.479742 kubelet[2608]: W1213 08:53:07.479597 2608 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:53:07.479742 kubelet[2608]: W1213 08:53:07.479700 2608 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:53:07.480524 kubelet[2608]: E1213 08:53:07.479753 2608 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.1-4-b1553ec4eb\" already exists" pod="kube-system/kube-scheduler-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.482518 kubelet[2608]: I1213 08:53:07.481526 2608 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.482518 kubelet[2608]: I1213 08:53:07.481610 2608 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.549882 kubelet[2608]: I1213 08:53:07.549821 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a6f7e84ca04122a92f7e467cb349ffd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-4-b1553ec4eb\" (UID: \"7a6f7e84ca04122a92f7e467cb349ffd\") " pod="kube-system/kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.549882 kubelet[2608]: I1213 08:53:07.549868 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.550115 kubelet[2608]: I1213 08:53:07.549905 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.550115 kubelet[2608]: I1213 08:53:07.549921 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.550115 kubelet[2608]: I1213 08:53:07.549938 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.550115 kubelet[2608]: I1213 08:53:07.549957 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/446e043bc931e01d4f774324938423db-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-4-b1553ec4eb\" (UID: \"446e043bc931e01d4f774324938423db\") " pod="kube-system/kube-scheduler-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.550115 kubelet[2608]: I1213 08:53:07.549983 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a6f7e84ca04122a92f7e467cb349ffd-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-4-b1553ec4eb\" (UID: \"7a6f7e84ca04122a92f7e467cb349ffd\") " pod="kube-system/kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.550403 kubelet[2608]: I1213 08:53:07.549998 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61a286f76e6a2ed5bb12dbe7c80446bf-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-4-b1553ec4eb\" (UID: \"61a286f76e6a2ed5bb12dbe7c80446bf\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.550403 kubelet[2608]: I1213 08:53:07.550013 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a6f7e84ca04122a92f7e467cb349ffd-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-4-b1553ec4eb\" (UID: \"7a6f7e84ca04122a92f7e467cb349ffd\") " pod="kube-system/kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:07.751689 sshd[2511]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:53:07.783342 kubelet[2608]: E1213 08:53:07.782729 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:07.785494 kubelet[2608]: E1213 08:53:07.783713 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:07.786353 kubelet[2608]: E1213 08:53:07.786321 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:08.077299 sshd[2643]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:53:08.315234 kubelet[2608]: I1213 08:53:08.315127 2608 apiserver.go:52] "Watching apiserver" Dec 13 08:53:08.348595 kubelet[2608]: I1213 08:53:08.348382 2608 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 08:53:08.401736 kubelet[2608]: E1213 08:53:08.401310 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:08.405656 kubelet[2608]: E1213 08:53:08.405616 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:08.423784 kubelet[2608]: W1213 08:53:08.420427 2608 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:53:08.423784 kubelet[2608]: E1213 08:53:08.421012 2608 kubelet.go:1928] "Failed creating a mirror pod 
for" err="pods \"kube-apiserver-ci-4081.2.1-4-b1553ec4eb\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:08.423784 kubelet[2608]: E1213 08:53:08.421969 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:08.487624 kubelet[2608]: I1213 08:53:08.487521 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-4-b1553ec4eb" podStartSLOduration=4.487487166 podStartE2EDuration="4.487487166s" podCreationTimestamp="2024-12-13 08:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:53:08.465332568 +0000 UTC m=+1.259523284" watchObservedRunningTime="2024-12-13 08:53:08.487487166 +0000 UTC m=+1.281677876" Dec 13 08:53:08.541325 kubelet[2608]: I1213 08:53:08.541240 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-4-b1553ec4eb" podStartSLOduration=1.5412144030000001 podStartE2EDuration="1.541214403s" podCreationTimestamp="2024-12-13 08:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:53:08.490966182 +0000 UTC m=+1.285156933" watchObservedRunningTime="2024-12-13 08:53:08.541214403 +0000 UTC m=+1.335405111" Dec 13 08:53:08.572558 kubelet[2608]: I1213 08:53:08.572354 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-4-b1553ec4eb" podStartSLOduration=1.57232743 podStartE2EDuration="1.57232743s" podCreationTimestamp="2024-12-13 08:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:53:08.545072865 +0000 UTC m=+1.339263580" watchObservedRunningTime="2024-12-13 08:53:08.57232743 +0000 UTC m=+1.366518146" Dec 13 08:53:09.403852 kubelet[2608]: E1213 08:53:09.403808 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:09.726295 sshd[2511]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:53:10.408687 kubelet[2608]: E1213 08:53:10.408641 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:10.427915 sshd[2651]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.229 user=root Dec 13 08:53:10.693664 kubelet[2608]: E1213 08:53:10.693625 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:10.901022 kubelet[2608]: E1213 08:53:10.900643 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:11.184026 update_engine[1453]: I20241213 08:53:11.183914 1453 update_attempter.cc:509] Updating boot flags... 
Dec 13 08:53:11.236390 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2675) Dec 13 08:53:11.298398 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2677) Dec 13 08:53:11.410668 kubelet[2608]: E1213 08:53:11.409565 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:11.410668 kubelet[2608]: E1213 08:53:11.409928 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:11.410668 kubelet[2608]: E1213 08:53:11.410341 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:11.822366 sshd[2511]: PAM: Permission denied for root from 218.92.0.229 Dec 13 08:53:12.353130 sshd[2511]: Received disconnect from 218.92.0.229 port 27302:11: [preauth] Dec 13 08:53:12.353130 sshd[2511]: Disconnected from authenticating user root 218.92.0.229 port 27302 [preauth] Dec 13 08:53:12.355592 systemd[1]: sshd@11-144.126.221.125:22-218.92.0.229:27302.service: Deactivated successfully. Dec 13 08:53:13.472822 sudo[1672]: pam_unix(sudo:session): session closed for user root Dec 13 08:53:13.478625 sshd[1666]: pam_unix(sshd:session): session closed for user core Dec 13 08:53:13.483945 systemd[1]: sshd@8-144.126.221.125:22-147.75.109.163:34428.service: Deactivated successfully. Dec 13 08:53:13.487088 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 08:53:13.487873 systemd[1]: session-9.scope: Consumed 6.842s CPU time, 191.1M memory peak, 0B memory swap peak. Dec 13 08:53:13.490993 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Dec 13 08:53:13.492770 systemd-logind[1451]: Removed session 9. Dec 13 08:53:21.729707 kubelet[2608]: I1213 08:53:21.729664 2608 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 08:53:21.730446 containerd[1483]: time="2024-12-13T08:53:21.730336668Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 08:53:21.732628 kubelet[2608]: I1213 08:53:21.730647 2608 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 08:53:22.661949 kubelet[2608]: I1213 08:53:22.661144 2608 topology_manager.go:215] "Topology Admit Handler" podUID="0f244385-8e64-457e-ae29-cfbf7018703f" podNamespace="kube-system" podName="kube-proxy-zccg5" Dec 13 08:53:22.682922 systemd[1]: Created slice kubepods-besteffort-pod0f244385_8e64_457e_ae29_cfbf7018703f.slice - libcontainer container kubepods-besteffort-pod0f244385_8e64_457e_ae29_cfbf7018703f.slice. 
Dec 13 08:53:22.752633 kubelet[2608]: I1213 08:53:22.752265 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f244385-8e64-457e-ae29-cfbf7018703f-lib-modules\") pod \"kube-proxy-zccg5\" (UID: \"0f244385-8e64-457e-ae29-cfbf7018703f\") " pod="kube-system/kube-proxy-zccg5" Dec 13 08:53:22.752633 kubelet[2608]: I1213 08:53:22.752345 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0f244385-8e64-457e-ae29-cfbf7018703f-kube-proxy\") pod \"kube-proxy-zccg5\" (UID: \"0f244385-8e64-457e-ae29-cfbf7018703f\") " pod="kube-system/kube-proxy-zccg5" Dec 13 08:53:22.752633 kubelet[2608]: I1213 08:53:22.752374 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f244385-8e64-457e-ae29-cfbf7018703f-xtables-lock\") pod \"kube-proxy-zccg5\" (UID: \"0f244385-8e64-457e-ae29-cfbf7018703f\") " pod="kube-system/kube-proxy-zccg5" Dec 13 08:53:22.752633 kubelet[2608]: I1213 08:53:22.752403 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsvvt\" (UniqueName: \"kubernetes.io/projected/0f244385-8e64-457e-ae29-cfbf7018703f-kube-api-access-qsvvt\") pod \"kube-proxy-zccg5\" (UID: \"0f244385-8e64-457e-ae29-cfbf7018703f\") " pod="kube-system/kube-proxy-zccg5" Dec 13 08:53:22.798052 kubelet[2608]: I1213 08:53:22.798002 2608 topology_manager.go:215] "Topology Admit Handler" podUID="64f0c4b5-55f3-44cf-bbd0-794b7908208c" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-p67hv" Dec 13 08:53:22.809692 systemd[1]: Created slice kubepods-besteffort-pod64f0c4b5_55f3_44cf_bbd0_794b7908208c.slice - libcontainer container kubepods-besteffort-pod64f0c4b5_55f3_44cf_bbd0_794b7908208c.slice. Dec 13 08:53:22.852994 kubelet[2608]: I1213 08:53:22.852888 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fd9j\" (UniqueName: \"kubernetes.io/projected/64f0c4b5-55f3-44cf-bbd0-794b7908208c-kube-api-access-8fd9j\") pod \"tigera-operator-7bc55997bb-p67hv\" (UID: \"64f0c4b5-55f3-44cf-bbd0-794b7908208c\") " pod="tigera-operator/tigera-operator-7bc55997bb-p67hv" Dec 13 08:53:22.852994 kubelet[2608]: I1213 08:53:22.852949 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64f0c4b5-55f3-44cf-bbd0-794b7908208c-var-lib-calico\") pod \"tigera-operator-7bc55997bb-p67hv\" (UID: \"64f0c4b5-55f3-44cf-bbd0-794b7908208c\") " pod="tigera-operator/tigera-operator-7bc55997bb-p67hv" Dec 13 08:53:22.991490 kubelet[2608]: E1213 08:53:22.991440 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:22.992505 containerd[1483]: time="2024-12-13T08:53:22.992447030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zccg5,Uid:0f244385-8e64-457e-ae29-cfbf7018703f,Namespace:kube-system,Attempt:0,}" Dec 13 08:53:23.043475 containerd[1483]: time="2024-12-13T08:53:23.043109629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:23.043475 containerd[1483]: time="2024-12-13T08:53:23.043176251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:23.043475 containerd[1483]: time="2024-12-13T08:53:23.043209720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:23.043475 containerd[1483]: time="2024-12-13T08:53:23.043340549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:23.079124 systemd[1]: Started cri-containerd-a929f76189e7f3e382c667d80968478546739c4d69e45b0c6cb4de327c30d8b7.scope - libcontainer container a929f76189e7f3e382c667d80968478546739c4d69e45b0c6cb4de327c30d8b7. Dec 13 08:53:23.116297 containerd[1483]: time="2024-12-13T08:53:23.116128999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-p67hv,Uid:64f0c4b5-55f3-44cf-bbd0-794b7908208c,Namespace:tigera-operator,Attempt:0,}" Dec 13 08:53:23.123704 containerd[1483]: time="2024-12-13T08:53:23.123649679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zccg5,Uid:0f244385-8e64-457e-ae29-cfbf7018703f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a929f76189e7f3e382c667d80968478546739c4d69e45b0c6cb4de327c30d8b7\"" Dec 13 08:53:23.125486 kubelet[2608]: E1213 08:53:23.125437 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:23.131436 containerd[1483]: time="2024-12-13T08:53:23.131360072Z" level=info msg="CreateContainer within sandbox \"a929f76189e7f3e382c667d80968478546739c4d69e45b0c6cb4de327c30d8b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 08:53:23.165945 containerd[1483]: time="2024-12-13T08:53:23.165513787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:23.165945 containerd[1483]: time="2024-12-13T08:53:23.165596236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:23.165945 containerd[1483]: time="2024-12-13T08:53:23.165618049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:23.165945 containerd[1483]: time="2024-12-13T08:53:23.165741035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:23.176526 containerd[1483]: time="2024-12-13T08:53:23.176410241Z" level=info msg="CreateContainer within sandbox \"a929f76189e7f3e382c667d80968478546739c4d69e45b0c6cb4de327c30d8b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e2775d0ea83eff1ffd05f26546ab6ce6a14e594d80b3d1e0fa84794198eaf8a6\"" Dec 13 08:53:23.178027 containerd[1483]: time="2024-12-13T08:53:23.177870650Z" level=info msg="StartContainer for \"e2775d0ea83eff1ffd05f26546ab6ce6a14e594d80b3d1e0fa84794198eaf8a6\"" Dec 13 08:53:23.198515 systemd[1]: Started cri-containerd-c572552d717d4ee7ce14c1260a6c8ac86b429277915a6878cff6b10e4cf4ccec.scope - libcontainer container c572552d717d4ee7ce14c1260a6c8ac86b429277915a6878cff6b10e4cf4ccec. 
Dec 13 08:53:23.231429 systemd[1]: Started cri-containerd-e2775d0ea83eff1ffd05f26546ab6ce6a14e594d80b3d1e0fa84794198eaf8a6.scope - libcontainer container e2775d0ea83eff1ffd05f26546ab6ce6a14e594d80b3d1e0fa84794198eaf8a6. Dec 13 08:53:23.283541 containerd[1483]: time="2024-12-13T08:53:23.282342612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-p67hv,Uid:64f0c4b5-55f3-44cf-bbd0-794b7908208c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c572552d717d4ee7ce14c1260a6c8ac86b429277915a6878cff6b10e4cf4ccec\"" Dec 13 08:53:23.295257 containerd[1483]: time="2024-12-13T08:53:23.294964659Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 08:53:23.308666 containerd[1483]: time="2024-12-13T08:53:23.307868679Z" level=info msg="StartContainer for \"e2775d0ea83eff1ffd05f26546ab6ce6a14e594d80b3d1e0fa84794198eaf8a6\" returns successfully" Dec 13 08:53:23.440240 kubelet[2608]: E1213 08:53:23.440019 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:23.458878 kubelet[2608]: I1213 08:53:23.458798 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zccg5" podStartSLOduration=1.458774909 podStartE2EDuration="1.458774909s" podCreationTimestamp="2024-12-13 08:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:53:23.458512697 +0000 UTC m=+16.252703413" watchObservedRunningTime="2024-12-13 08:53:23.458774909 +0000 UTC m=+16.252965622" Dec 13 08:53:27.640924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757450649.mount: Deactivated successfully. 
Dec 13 08:53:28.308229 containerd[1483]: time="2024-12-13T08:53:28.308130197Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:28.310337 containerd[1483]: time="2024-12-13T08:53:28.310247924Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764345" Dec 13 08:53:28.313610 containerd[1483]: time="2024-12-13T08:53:28.313488026Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:28.319506 containerd[1483]: time="2024-12-13T08:53:28.318776831Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:28.319720 containerd[1483]: time="2024-12-13T08:53:28.319692371Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.024670622s" Dec 13 08:53:28.319854 containerd[1483]: time="2024-12-13T08:53:28.319836959Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 08:53:28.329176 containerd[1483]: time="2024-12-13T08:53:28.329130351Z" level=info msg="CreateContainer within sandbox \"c572552d717d4ee7ce14c1260a6c8ac86b429277915a6878cff6b10e4cf4ccec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 08:53:28.388022 containerd[1483]: time="2024-12-13T08:53:28.387961208Z" level=info msg="CreateContainer within sandbox \"c572552d717d4ee7ce14c1260a6c8ac86b429277915a6878cff6b10e4cf4ccec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1903f87d0ef0b75c68f6f8a2954294a22a76e78b84aa85d2d316ba5c37095a78\"" Dec 13 08:53:28.397742 containerd[1483]: time="2024-12-13T08:53:28.397637374Z" level=info msg="StartContainer for \"1903f87d0ef0b75c68f6f8a2954294a22a76e78b84aa85d2d316ba5c37095a78\"" Dec 13 08:53:28.439659 systemd[1]: Started cri-containerd-1903f87d0ef0b75c68f6f8a2954294a22a76e78b84aa85d2d316ba5c37095a78.scope - libcontainer container 1903f87d0ef0b75c68f6f8a2954294a22a76e78b84aa85d2d316ba5c37095a78. 
Dec 13 08:53:28.478271 containerd[1483]: time="2024-12-13T08:53:28.478059673Z" level=info msg="StartContainer for \"1903f87d0ef0b75c68f6f8a2954294a22a76e78b84aa85d2d316ba5c37095a78\" returns successfully" Dec 13 08:53:31.665795 kubelet[2608]: I1213 08:53:31.664930 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-p67hv" podStartSLOduration=4.623303765 podStartE2EDuration="9.664898786s" podCreationTimestamp="2024-12-13 08:53:22 +0000 UTC" firstStartedPulling="2024-12-13 08:53:23.285314623 +0000 UTC m=+16.079505317" lastFinishedPulling="2024-12-13 08:53:28.326909645 +0000 UTC m=+21.121100338" observedRunningTime="2024-12-13 08:53:29.511708501 +0000 UTC m=+22.305899216" watchObservedRunningTime="2024-12-13 08:53:31.664898786 +0000 UTC m=+24.459089479" Dec 13 08:53:31.669874 kubelet[2608]: I1213 08:53:31.669797 2608 topology_manager.go:215] "Topology Admit Handler" podUID="3da13a4c-13fe-4bce-a31d-025317c7c94b" podNamespace="calico-system" podName="calico-typha-777fd6cc7f-v8npm" Dec 13 08:53:31.697482 systemd[1]: Created slice kubepods-besteffort-pod3da13a4c_13fe_4bce_a31d_025317c7c94b.slice - libcontainer container kubepods-besteffort-pod3da13a4c_13fe_4bce_a31d_025317c7c94b.slice. Dec 13 08:53:31.735975 kubelet[2608]: I1213 08:53:31.735906 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da13a4c-13fe-4bce-a31d-025317c7c94b-tigera-ca-bundle\") pod \"calico-typha-777fd6cc7f-v8npm\" (UID: \"3da13a4c-13fe-4bce-a31d-025317c7c94b\") " pod="calico-system/calico-typha-777fd6cc7f-v8npm" Dec 13 08:53:31.736329 kubelet[2608]: I1213 08:53:31.736185 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3da13a4c-13fe-4bce-a31d-025317c7c94b-typha-certs\") pod \"calico-typha-777fd6cc7f-v8npm\" (UID: \"3da13a4c-13fe-4bce-a31d-025317c7c94b\") " pod="calico-system/calico-typha-777fd6cc7f-v8npm" Dec 13 08:53:31.736329 kubelet[2608]: I1213 08:53:31.736259 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72vzq\" (UniqueName: \"kubernetes.io/projected/3da13a4c-13fe-4bce-a31d-025317c7c94b-kube-api-access-72vzq\") pod \"calico-typha-777fd6cc7f-v8npm\" (UID: \"3da13a4c-13fe-4bce-a31d-025317c7c94b\") " pod="calico-system/calico-typha-777fd6cc7f-v8npm" Dec 13 08:53:31.826872 kubelet[2608]: I1213 08:53:31.826733 2608 topology_manager.go:215] "Topology Admit Handler" podUID="2c09cde6-ab63-411e-a7a2-64acd5a2b065" podNamespace="calico-system" podName="calico-node-dlxd6" Dec 13 08:53:31.839269 systemd[1]: Created slice kubepods-besteffort-pod2c09cde6_ab63_411e_a7a2_64acd5a2b065.slice - libcontainer container kubepods-besteffort-pod2c09cde6_ab63_411e_a7a2_64acd5a2b065.slice. 
Dec 13 08:53:31.938701 kubelet[2608]: I1213 08:53:31.938660 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-cni-log-dir\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.939217 kubelet[2608]: I1213 08:53:31.938944 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-lib-modules\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.939217 kubelet[2608]: I1213 08:53:31.938997 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c09cde6-ab63-411e-a7a2-64acd5a2b065-tigera-ca-bundle\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.939217 kubelet[2608]: I1213 08:53:31.939025 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbhfh\" (UniqueName: \"kubernetes.io/projected/2c09cde6-ab63-411e-a7a2-64acd5a2b065-kube-api-access-bbhfh\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.939217 kubelet[2608]: I1213 08:53:31.939061 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-xtables-lock\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.939217 kubelet[2608]: I1213 08:53:31.939088 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-policysync\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.939531 kubelet[2608]: I1213 08:53:31.939119 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-flexvol-driver-host\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.939531 kubelet[2608]: I1213 08:53:31.939158 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2c09cde6-ab63-411e-a7a2-64acd5a2b065-node-certs\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.940347 kubelet[2608]: I1213 08:53:31.939885 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-var-lib-calico\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.940347 kubelet[2608]: I1213 08:53:31.939977 2608 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-var-run-calico\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.940347 kubelet[2608]: I1213 08:53:31.940039 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-cni-net-dir\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.940347 kubelet[2608]: I1213 08:53:31.940212 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2c09cde6-ab63-411e-a7a2-64acd5a2b065-cni-bin-dir\") pod \"calico-node-dlxd6\" (UID: \"2c09cde6-ab63-411e-a7a2-64acd5a2b065\") " pod="calico-system/calico-node-dlxd6" Dec 13 08:53:31.975992 kubelet[2608]: I1213 08:53:31.975879 2608 topology_manager.go:215] "Topology Admit Handler" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" podNamespace="calico-system" podName="csi-node-driver-jd225" Dec 13 08:53:31.976815 kubelet[2608]: E1213 08:53:31.976545 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd225" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" Dec 13 08:53:32.015398 kubelet[2608]: E1213 08:53:32.013857 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:32.016237 containerd[1483]: time="2024-12-13T08:53:32.016103339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-777fd6cc7f-v8npm,Uid:3da13a4c-13fe-4bce-a31d-025317c7c94b,Namespace:calico-system,Attempt:0,}" Dec 13 08:53:32.040724 kubelet[2608]: I1213 08:53:32.040670 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c2374d2-93f5-41dd-beaa-5f3be640a74e-registration-dir\") pod \"csi-node-driver-jd225\" (UID: \"9c2374d2-93f5-41dd-beaa-5f3be640a74e\") " pod="calico-system/csi-node-driver-jd225" Dec 13 08:53:32.040724 kubelet[2608]: I1213 08:53:32.040733 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c2374d2-93f5-41dd-beaa-5f3be640a74e-kubelet-dir\") pod \"csi-node-driver-jd225\" (UID: \"9c2374d2-93f5-41dd-beaa-5f3be640a74e\") " pod="calico-system/csi-node-driver-jd225" Dec 13 08:53:32.040911 kubelet[2608]: I1213 08:53:32.040761 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c2374d2-93f5-41dd-beaa-5f3be640a74e-varrun\") pod \"csi-node-driver-jd225\" (UID: \"9c2374d2-93f5-41dd-beaa-5f3be640a74e\") " pod="calico-system/csi-node-driver-jd225" Dec 13 08:53:32.041575 kubelet[2608]: I1213 08:53:32.041544 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/9c2374d2-93f5-41dd-beaa-5f3be640a74e-socket-dir\") pod \"csi-node-driver-jd225\" (UID: \"9c2374d2-93f5-41dd-beaa-5f3be640a74e\") " pod="calico-system/csi-node-driver-jd225" Dec 13 08:53:32.042010 kubelet[2608]: I1213 08:53:32.041987 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb5qj\" (UniqueName: \"kubernetes.io/projected/9c2374d2-93f5-41dd-beaa-5f3be640a74e-kube-api-access-qb5qj\") pod \"csi-node-driver-jd225\" (UID: \"9c2374d2-93f5-41dd-beaa-5f3be640a74e\") " pod="calico-system/csi-node-driver-jd225" Dec 13 08:53:32.059250 kubelet[2608]: E1213 08:53:32.057147 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.059250 kubelet[2608]: W1213 08:53:32.057235 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.059250 kubelet[2608]: E1213 08:53:32.057273 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.095827 kubelet[2608]: E1213 08:53:32.094179 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.095827 kubelet[2608]: W1213 08:53:32.095730 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.095827 kubelet[2608]: E1213 08:53:32.095764 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.125456 containerd[1483]: time="2024-12-13T08:53:32.124025437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:32.125456 containerd[1483]: time="2024-12-13T08:53:32.124434197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:32.125456 containerd[1483]: time="2024-12-13T08:53:32.124454502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:32.125456 containerd[1483]: time="2024-12-13T08:53:32.124599009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:32.143455 kubelet[2608]: E1213 08:53:32.143347 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.143455 kubelet[2608]: W1213 08:53:32.143372 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.143737 kubelet[2608]: E1213 08:53:32.143469 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:32.144512 kubelet[2608]: E1213 08:53:32.143868 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.144512 kubelet[2608]: W1213 08:53:32.143906 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.144512 kubelet[2608]: E1213 08:53:32.143924 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.144512 kubelet[2608]: E1213 08:53:32.144154 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.144512 kubelet[2608]: W1213 08:53:32.144164 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.144512 kubelet[2608]: E1213 08:53:32.144175 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.146470 kubelet[2608]: E1213 08:53:32.145847 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.146470 kubelet[2608]: W1213 08:53:32.145867 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.146470 kubelet[2608]: E1213 08:53:32.145891 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.147144 kubelet[2608]: E1213 08:53:32.146942 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.147144 kubelet[2608]: W1213 08:53:32.147018 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.147144 kubelet[2608]: E1213 08:53:32.147098 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.147994 kubelet[2608]: E1213 08:53:32.147805 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.147994 kubelet[2608]: W1213 08:53:32.147820 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.147994 kubelet[2608]: E1213 08:53:32.147855 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:32.148677 kubelet[2608]: E1213 08:53:32.148490 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.148677 kubelet[2608]: W1213 08:53:32.148504 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.148677 kubelet[2608]: E1213 08:53:32.148534 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.149715 kubelet[2608]: E1213 08:53:32.149550 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.149715 kubelet[2608]: W1213 08:53:32.149562 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.149715 kubelet[2608]: E1213 08:53:32.149616 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.150099 kubelet[2608]: E1213 08:53:32.150081 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.150172 kubelet[2608]: W1213 08:53:32.150159 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.150698 kubelet[2608]: E1213 08:53:32.150604 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.150698 kubelet[2608]: W1213 08:53:32.150617 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.150933 kubelet[2608]: E1213 08:53:32.150921 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.151396 kubelet[2608]: W1213 08:53:32.151097 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.151534 kubelet[2608]: E1213 08:53:32.151523 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.151647 kubelet[2608]: W1213 08:53:32.151579 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.151966 kubelet[2608]: E1213 08:53:32.151868 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.151966 kubelet[2608]: W1213 08:53:32.151878 2608 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.151966 kubelet[2608]: E1213 08:53:32.151892 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.151966 kubelet[2608]: E1213 08:53:32.151906 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.152151 kubelet[2608]: E1213 08:53:32.152006 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.152151 kubelet[2608]: E1213 08:53:32.152045 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.152151 kubelet[2608]: E1213 08:53:32.152063 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.152550 kubelet[2608]: E1213 08:53:32.152339 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.152550 kubelet[2608]: W1213 08:53:32.152350 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.152550 kubelet[2608]: E1213 08:53:32.152367 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.153640 kubelet[2608]: E1213 08:53:32.153512 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.153640 kubelet[2608]: W1213 08:53:32.153526 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.153640 kubelet[2608]: E1213 08:53:32.153544 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.153901 kubelet[2608]: E1213 08:53:32.153890 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.154084 kubelet[2608]: W1213 08:53:32.153984 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.154084 kubelet[2608]: E1213 08:53:32.154008 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:32.154418 kubelet[2608]: E1213 08:53:32.154293 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.154418 kubelet[2608]: W1213 08:53:32.154304 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.154632 kubelet[2608]: E1213 08:53:32.154621 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.154748 kubelet[2608]: W1213 08:53:32.154680 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.155038 kubelet[2608]: E1213 08:53:32.154956 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.155038 kubelet[2608]: W1213 08:53:32.154967 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.155301 kubelet[2608]: E1213 08:53:32.155254 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.155301 kubelet[2608]: W1213 08:53:32.155264 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.155503 kubelet[2608]: E1213 08:53:32.155399 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.155653 kubelet[2608]: E1213 08:53:32.155644 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.155804 kubelet[2608]: W1213 08:53:32.155701 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.155804 kubelet[2608]: E1213 08:53:32.155715 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.155804 kubelet[2608]: E1213 08:53:32.155743 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.155804 kubelet[2608]: E1213 08:53:32.155775 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.155921 kubelet[2608]: E1213 08:53:32.155821 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:32.156260 kubelet[2608]: E1213 08:53:32.156069 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.156260 kubelet[2608]: W1213 08:53:32.156080 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.156260 kubelet[2608]: E1213 08:53:32.156095 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.157352 kubelet[2608]: E1213 08:53:32.157332 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.157352 kubelet[2608]: W1213 08:53:32.157349 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.157716 kubelet[2608]: E1213 08:53:32.157369 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.158345 kubelet[2608]: E1213 08:53:32.158305 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.158345 kubelet[2608]: W1213 08:53:32.158324 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.159032 kubelet[2608]: E1213 08:53:32.158506 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.159032 kubelet[2608]: E1213 08:53:32.158538 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.159032 kubelet[2608]: W1213 08:53:32.158546 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.159032 kubelet[2608]: E1213 08:53:32.158557 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.170394 kubelet[2608]: E1213 08:53:32.169953 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:32.171996 containerd[1483]: time="2024-12-13T08:53:32.171940680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dlxd6,Uid:2c09cde6-ab63-411e-a7a2-64acd5a2b065,Namespace:calico-system,Attempt:0,}" Dec 13 08:53:32.176531 systemd[1]: Started cri-containerd-6f8ea445bbb339e9957377d4514ec07e03cfc04c8ddea3735a6756a8b2289c57.scope - libcontainer container 6f8ea445bbb339e9957377d4514ec07e03cfc04c8ddea3735a6756a8b2289c57. 
Dec 13 08:53:32.184053 kubelet[2608]: E1213 08:53:32.184006 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:32.184053 kubelet[2608]: W1213 08:53:32.184040 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:32.184293 kubelet[2608]: E1213 08:53:32.184071 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:32.244473 containerd[1483]: time="2024-12-13T08:53:32.242173097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:32.244473 containerd[1483]: time="2024-12-13T08:53:32.243455969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:32.244473 containerd[1483]: time="2024-12-13T08:53:32.243481389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:32.244473 containerd[1483]: time="2024-12-13T08:53:32.243650883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:32.282501 systemd[1]: Started cri-containerd-264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a.scope - libcontainer container 264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a. Dec 13 08:53:32.342628 containerd[1483]: time="2024-12-13T08:53:32.342516448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dlxd6,Uid:2c09cde6-ab63-411e-a7a2-64acd5a2b065,Namespace:calico-system,Attempt:0,} returns sandbox id \"264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a\"" Dec 13 08:53:32.354805 containerd[1483]: time="2024-12-13T08:53:32.354130422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-777fd6cc7f-v8npm,Uid:3da13a4c-13fe-4bce-a31d-025317c7c94b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f8ea445bbb339e9957377d4514ec07e03cfc04c8ddea3735a6756a8b2289c57\"" Dec 13 08:53:32.355431 kubelet[2608]: E1213 08:53:32.355406 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:32.356911 kubelet[2608]: E1213 08:53:32.356489 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:32.358881 containerd[1483]: time="2024-12-13T08:53:32.358556092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 08:53:33.355986 kubelet[2608]: E1213 08:53:33.354833 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd225" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" Dec 13 08:53:33.706603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759168297.mount: Deactivated successfully. 
Dec 13 08:53:34.623268 containerd[1483]: time="2024-12-13T08:53:34.623213725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:34.626403 containerd[1483]: time="2024-12-13T08:53:34.626340406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 08:53:34.629823 containerd[1483]: time="2024-12-13T08:53:34.629778740Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:34.635210 containerd[1483]: time="2024-12-13T08:53:34.635053963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:34.637757 containerd[1483]: time="2024-12-13T08:53:34.637637760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.279004799s" Dec 13 08:53:34.637757 containerd[1483]: time="2024-12-13T08:53:34.637727595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 08:53:34.640464 containerd[1483]: time="2024-12-13T08:53:34.640375092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 08:53:34.668401 containerd[1483]: time="2024-12-13T08:53:34.668343474Z" level=info msg="CreateContainer within sandbox \"6f8ea445bbb339e9957377d4514ec07e03cfc04c8ddea3735a6756a8b2289c57\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 08:53:34.698404 containerd[1483]: time="2024-12-13T08:53:34.697812121Z" level=info msg="CreateContainer within sandbox \"6f8ea445bbb339e9957377d4514ec07e03cfc04c8ddea3735a6756a8b2289c57\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"925c2fc442fc1daed9d36b3f73835835156fba76763ce244e678d1bc7c6e79a8\"" Dec 13 08:53:34.705388 containerd[1483]: time="2024-12-13T08:53:34.703717983Z" level=info msg="StartContainer for \"925c2fc442fc1daed9d36b3f73835835156fba76763ce244e678d1bc7c6e79a8\"" Dec 13 08:53:34.772397 systemd[1]: Started cri-containerd-925c2fc442fc1daed9d36b3f73835835156fba76763ce244e678d1bc7c6e79a8.scope - libcontainer container 925c2fc442fc1daed9d36b3f73835835156fba76763ce244e678d1bc7c6e79a8. 
Dec 13 08:53:34.842167 containerd[1483]: time="2024-12-13T08:53:34.842084512Z" level=info msg="StartContainer for \"925c2fc442fc1daed9d36b3f73835835156fba76763ce244e678d1bc7c6e79a8\" returns successfully" Dec 13 08:53:35.356785 kubelet[2608]: E1213 08:53:35.355478 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd225" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" Dec 13 08:53:35.475295 kubelet[2608]: E1213 08:53:35.474822 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:35.499375 kubelet[2608]: I1213 08:53:35.499285 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-777fd6cc7f-v8npm" podStartSLOduration=2.21866305 podStartE2EDuration="4.499257867s" podCreationTimestamp="2024-12-13 08:53:31 +0000 UTC" firstStartedPulling="2024-12-13 08:53:32.35822119 +0000 UTC m=+25.152411894" lastFinishedPulling="2024-12-13 08:53:34.638816014 +0000 UTC m=+27.433006711" observedRunningTime="2024-12-13 08:53:35.498596447 +0000 UTC m=+28.292787163" watchObservedRunningTime="2024-12-13 08:53:35.499257867 +0000 UTC m=+28.293448584" Dec 13 08:53:35.547157 kubelet[2608]: E1213 08:53:35.547119 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.547157 kubelet[2608]: W1213 08:53:35.547150 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.547512 kubelet[2608]: E1213 08:53:35.547178 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.547512 kubelet[2608]: E1213 08:53:35.547508 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.547670 kubelet[2608]: W1213 08:53:35.547522 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.547670 kubelet[2608]: E1213 08:53:35.547540 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.547825 kubelet[2608]: E1213 08:53:35.547807 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.547825 kubelet[2608]: W1213 08:53:35.547821 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.547978 kubelet[2608]: E1213 08:53:35.547838 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:35.548112 kubelet[2608]: E1213 08:53:35.548092 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.548112 kubelet[2608]: W1213 08:53:35.548109 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.548340 kubelet[2608]: E1213 08:53:35.548125 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.548545 kubelet[2608]: E1213 08:53:35.548524 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.548545 kubelet[2608]: W1213 08:53:35.548544 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.548729 kubelet[2608]: E1213 08:53:35.548564 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.548989 kubelet[2608]: E1213 08:53:35.548969 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.548989 kubelet[2608]: W1213 08:53:35.548988 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.549237 kubelet[2608]: E1213 08:53:35.549005 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.549341 kubelet[2608]: E1213 08:53:35.549323 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.549341 kubelet[2608]: W1213 08:53:35.549338 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.549341 kubelet[2608]: E1213 08:53:35.549354 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.549645 kubelet[2608]: E1213 08:53:35.549588 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.549645 kubelet[2608]: W1213 08:53:35.549601 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.549645 kubelet[2608]: E1213 08:53:35.549616 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:35.549934 kubelet[2608]: E1213 08:53:35.549883 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.549934 kubelet[2608]: W1213 08:53:35.549896 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.549934 kubelet[2608]: E1213 08:53:35.549912 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.550286 kubelet[2608]: E1213 08:53:35.550150 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.550286 kubelet[2608]: W1213 08:53:35.550163 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.550286 kubelet[2608]: E1213 08:53:35.550177 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.550778 kubelet[2608]: E1213 08:53:35.550437 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.550778 kubelet[2608]: W1213 08:53:35.550450 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.550778 kubelet[2608]: E1213 08:53:35.550463 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.550778 kubelet[2608]: E1213 08:53:35.550689 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.550778 kubelet[2608]: W1213 08:53:35.550700 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.550778 kubelet[2608]: E1213 08:53:35.550715 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.551342 kubelet[2608]: E1213 08:53:35.550970 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.551342 kubelet[2608]: W1213 08:53:35.550983 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.551342 kubelet[2608]: E1213 08:53:35.550999 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:35.551342 kubelet[2608]: E1213 08:53:35.551377 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.551342 kubelet[2608]: W1213 08:53:35.551393 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.551342 kubelet[2608]: E1213 08:53:35.551408 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.551997 kubelet[2608]: E1213 08:53:35.551670 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.551997 kubelet[2608]: W1213 08:53:35.551682 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.551997 kubelet[2608]: E1213 08:53:35.551697 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.580217 kubelet[2608]: E1213 08:53:35.579608 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.580217 kubelet[2608]: W1213 08:53:35.579636 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.580217 kubelet[2608]: E1213 08:53:35.579665 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.581854 kubelet[2608]: E1213 08:53:35.581663 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.581854 kubelet[2608]: W1213 08:53:35.581686 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.581854 kubelet[2608]: E1213 08:53:35.581714 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.582388 kubelet[2608]: E1213 08:53:35.582225 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.582388 kubelet[2608]: W1213 08:53:35.582238 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.582388 kubelet[2608]: E1213 08:53:35.582281 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:35.582751 kubelet[2608]: E1213 08:53:35.582582 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.582751 kubelet[2608]: W1213 08:53:35.582592 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.582751 kubelet[2608]: E1213 08:53:35.582658 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.582998 kubelet[2608]: E1213 08:53:35.582897 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.582998 kubelet[2608]: W1213 08:53:35.582909 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.582998 kubelet[2608]: E1213 08:53:35.582925 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.583461 kubelet[2608]: E1213 08:53:35.583296 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.583461 kubelet[2608]: W1213 08:53:35.583307 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.583461 kubelet[2608]: E1213 08:53:35.583323 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.583772 kubelet[2608]: E1213 08:53:35.583608 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.583772 kubelet[2608]: W1213 08:53:35.583620 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.583772 kubelet[2608]: E1213 08:53:35.583652 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.584057 kubelet[2608]: E1213 08:53:35.583931 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.584057 kubelet[2608]: W1213 08:53:35.583947 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.584057 kubelet[2608]: E1213 08:53:35.583981 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:35.584674 kubelet[2608]: E1213 08:53:35.584551 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.584674 kubelet[2608]: W1213 08:53:35.584564 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.584674 kubelet[2608]: E1213 08:53:35.584602 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.585341 kubelet[2608]: E1213 08:53:35.585024 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.585341 kubelet[2608]: W1213 08:53:35.585036 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.585341 kubelet[2608]: E1213 08:53:35.585053 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.586322 kubelet[2608]: E1213 08:53:35.586291 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.586322 kubelet[2608]: W1213 08:53:35.586314 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.586402 kubelet[2608]: E1213 08:53:35.586351 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.586649 kubelet[2608]: E1213 08:53:35.586631 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.586702 kubelet[2608]: W1213 08:53:35.586649 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.586828 kubelet[2608]: E1213 08:53:35.586809 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.587068 kubelet[2608]: E1213 08:53:35.587052 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.587111 kubelet[2608]: W1213 08:53:35.587069 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.587415 kubelet[2608]: E1213 08:53:35.587293 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:35.587415 kubelet[2608]: E1213 08:53:35.587321 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.587415 kubelet[2608]: W1213 08:53:35.587334 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.587521 kubelet[2608]: E1213 08:53:35.587424 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.587703 kubelet[2608]: E1213 08:53:35.587662 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.587703 kubelet[2608]: W1213 08:53:35.587681 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.587793 kubelet[2608]: E1213 08:53:35.587703 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.588273 kubelet[2608]: E1213 08:53:35.588227 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.588273 kubelet[2608]: W1213 08:53:35.588245 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.588273 kubelet[2608]: E1213 08:53:35.588261 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.588562 kubelet[2608]: E1213 08:53:35.588505 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.588562 kubelet[2608]: W1213 08:53:35.588516 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.588562 kubelet[2608]: E1213 08:53:35.588531 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:35.589323 kubelet[2608]: E1213 08:53:35.589298 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:35.589323 kubelet[2608]: W1213 08:53:35.589317 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:35.589477 kubelet[2608]: E1213 08:53:35.589335 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:36.242473 containerd[1483]: time="2024-12-13T08:53:36.242405158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:36.245043 containerd[1483]: time="2024-12-13T08:53:36.244956940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 08:53:36.262610 containerd[1483]: time="2024-12-13T08:53:36.262522782Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:36.283233 containerd[1483]: time="2024-12-13T08:53:36.283095186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:36.286405 containerd[1483]: time="2024-12-13T08:53:36.285758178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.645321083s" Dec 13 08:53:36.286405 containerd[1483]: time="2024-12-13T08:53:36.285817502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 08:53:36.291230 containerd[1483]: time="2024-12-13T08:53:36.291102977Z" level=info msg="CreateContainer within sandbox \"264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 08:53:36.361057 containerd[1483]: time="2024-12-13T08:53:36.360880773Z" level=info msg="CreateContainer within sandbox \"264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf\"" Dec 13 08:53:36.361989 containerd[1483]: time="2024-12-13T08:53:36.361918705Z" level=info msg="StartContainer for \"382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf\"" Dec 13 08:53:36.425488 systemd[1]: Started cri-containerd-382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf.scope - libcontainer container 382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf. Dec 13 08:53:36.482046 containerd[1483]: time="2024-12-13T08:53:36.481308854Z" level=info msg="StartContainer for \"382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf\" returns successfully" Dec 13 08:53:36.489386 kubelet[2608]: I1213 08:53:36.489328 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:53:36.491367 kubelet[2608]: E1213 08:53:36.490728 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:36.518875 systemd[1]: cri-containerd-382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf.scope: Deactivated successfully. 
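Note: the pod_startup_latency_tracker entry earlier in this section (for calico-typha-777fd6cc7f-v8npm) can be reproduced from its own timestamps, assuming the SLO duration is the end-to-end startup time minus the time spent pulling images, which the logged numbers are consistent with:

    # Reproducing podStartSLOduration=2.21866305s and podStartE2EDuration=4.499257867s
    # from the timestamps printed in the tracker entry above. Sketch only.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created        = datetime.strptime("2024-12-13 08:53:31.000000", fmt)  # podCreationTimestamp
    first_pull     = datetime.strptime("2024-12-13 08:53:32.358221", fmt)  # firstStartedPulling
    last_pull_done = datetime.strptime("2024-12-13 08:53:34.638816", fmt)  # lastFinishedPulling
    observed_run   = datetime.strptime("2024-12-13 08:53:35.499258", fmt)  # watchObservedRunningTime

    e2e  = (observed_run - created).total_seconds()       # ~4.499258 s (podStartE2EDuration)
    pull = (last_pull_done - first_pull).total_seconds()  # ~2.280595 s spent pulling images
    slo  = e2e - pull                                     # ~2.218663 s (podStartSLOduration)
    print(f"E2E={e2e:.6f}s pull={pull:.6f}s SLO={slo:.6f}s")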
Dec 13 08:53:36.562347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf-rootfs.mount: Deactivated successfully. Dec 13 08:53:36.633246 containerd[1483]: time="2024-12-13T08:53:36.573158334Z" level=info msg="shim disconnected" id=382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf namespace=k8s.io Dec 13 08:53:36.633246 containerd[1483]: time="2024-12-13T08:53:36.633225745Z" level=warning msg="cleaning up after shim disconnected" id=382093a95d306b485fc3c7ed1946354aab6f5cc52e999994e3b119a58e11debf namespace=k8s.io Dec 13 08:53:36.633246 containerd[1483]: time="2024-12-13T08:53:36.633248334Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:53:37.355179 kubelet[2608]: E1213 08:53:37.354912 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd225" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" Dec 13 08:53:37.494772 kubelet[2608]: E1213 08:53:37.494728 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:37.497606 containerd[1483]: time="2024-12-13T08:53:37.497561994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 08:53:39.357676 kubelet[2608]: E1213 08:53:39.356492 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd225" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" Dec 13 08:53:41.357556 kubelet[2608]: E1213 08:53:41.357478 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd225" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" Dec 13 08:53:41.486296 containerd[1483]: time="2024-12-13T08:53:41.485436796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 08:53:41.501846 containerd[1483]: time="2024-12-13T08:53:41.501664030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:41.506400 containerd[1483]: time="2024-12-13T08:53:41.506345905Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:41.509424 containerd[1483]: time="2024-12-13T08:53:41.507917958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:41.509424 containerd[1483]: time="2024-12-13T08:53:41.509218787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.011581795s" Dec 13 08:53:41.509424 containerd[1483]: time="2024-12-13T08:53:41.509269246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 08:53:41.515518 containerd[1483]: time="2024-12-13T08:53:41.515418624Z" level=info msg="CreateContainer within sandbox \"264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 08:53:41.546503 containerd[1483]: time="2024-12-13T08:53:41.546446902Z" level=info msg="CreateContainer within sandbox \"264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5\"" Dec 13 08:53:41.548099 containerd[1483]: time="2024-12-13T08:53:41.547480574Z" level=info msg="StartContainer for \"2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5\"" Dec 13 08:53:41.657537 systemd[1]: Started cri-containerd-2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5.scope - libcontainer container 2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5. Dec 13 08:53:41.714277 containerd[1483]: time="2024-12-13T08:53:41.714169211Z" level=info msg="StartContainer for \"2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5\" returns successfully" Dec 13 08:53:42.348255 systemd[1]: cri-containerd-2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5.scope: Deactivated successfully. Dec 13 08:53:42.392655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5-rootfs.mount: Deactivated successfully. 
Dec 13 08:53:42.400804 containerd[1483]: time="2024-12-13T08:53:42.400707221Z" level=info msg="shim disconnected" id=2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5 namespace=k8s.io Dec 13 08:53:42.400804 containerd[1483]: time="2024-12-13T08:53:42.400774459Z" level=warning msg="cleaning up after shim disconnected" id=2851fba1d47f6c15c64a4e7f8fac158d606e68f15a5a08f6f9a4de239d4a92c5 namespace=k8s.io Dec 13 08:53:42.400804 containerd[1483]: time="2024-12-13T08:53:42.400785663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:53:42.409880 kubelet[2608]: I1213 08:53:42.409585 2608 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 08:53:42.469771 kubelet[2608]: I1213 08:53:42.469706 2608 topology_manager.go:215] "Topology Admit Handler" podUID="5f39249e-c6ea-419b-8152-f432f5354acc" podNamespace="calico-system" podName="calico-kube-controllers-57b6c9448b-wct54" Dec 13 08:53:42.473610 kubelet[2608]: I1213 08:53:42.473565 2608 topology_manager.go:215] "Topology Admit Handler" podUID="d5b39035-4771-4bba-abc4-b862e2d1a098" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fq2kp" Dec 13 08:53:42.481930 kubelet[2608]: I1213 08:53:42.481867 2608 topology_manager.go:215] "Topology Admit Handler" podUID="32f1faf5-14a7-4e77-ad30-f5c2a4239f44" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jjl49" Dec 13 08:53:42.484067 kubelet[2608]: W1213 08:53:42.483970 2608 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.2.1-4-b1553ec4eb" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-4-b1553ec4eb' and this object Dec 13 08:53:42.484067 kubelet[2608]: E1213 08:53:42.484027 2608 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.2.1-4-b1553ec4eb" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-4-b1553ec4eb' and this object Dec 13 08:53:42.486057 kubelet[2608]: I1213 08:53:42.486012 2608 topology_manager.go:215] "Topology Admit Handler" podUID="ddf3153c-b769-4b7e-ba57-f0bc3c3374a4" podNamespace="calico-apiserver" podName="calico-apiserver-5c5f75d696-2kzjr" Dec 13 08:53:42.489705 kubelet[2608]: I1213 08:53:42.489660 2608 topology_manager.go:215] "Topology Admit Handler" podUID="8ba1434c-4e4e-46fd-97f9-ebbb427b8559" podNamespace="calico-apiserver" podName="calico-apiserver-5c5f75d696-w7vmr" Dec 13 08:53:42.499102 systemd[1]: Created slice kubepods-besteffort-pod5f39249e_c6ea_419b_8152_f432f5354acc.slice - libcontainer container kubepods-besteffort-pod5f39249e_c6ea_419b_8152_f432f5354acc.slice. Dec 13 08:53:42.514848 kubelet[2608]: E1213 08:53:42.513630 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:42.516694 containerd[1483]: time="2024-12-13T08:53:42.516598755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 08:53:42.518341 systemd[1]: Created slice kubepods-besteffort-pod8ba1434c_4e4e_46fd_97f9_ebbb427b8559.slice - libcontainer container kubepods-besteffort-pod8ba1434c_4e4e_46fd_97f9_ebbb427b8559.slice. 
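Note: the recurring dns.go:153 warning is the kubelet clamping the node's resolv.conf to its maximum of three nameservers; the "applied nameserver line" printed here even lists 67.207.67.3 twice, so the host file evidently contains more entries than the kubelet will pass through to pods. A rough sketch of that clamping; the limit of 3 matches the kubelet's documented maximum, and the resolv.conf path is assumed, not taken from this log.

    # Rough sketch of the nameserver clamping behind the dns.go:153 warning:
    # keep the first MAX_NS nameserver entries and warn about the overflow.
    MAX_NS = 3

    def clamp_nameservers(path="/etc/resolv.conf"):
        with open(path) as f:
            servers = [line.split()[1] for line in f
                       if line.startswith("nameserver") and len(line.split()) > 1]
        if len(servers) > MAX_NS:
            print("Nameserver limits exceeded, applying only:",
                  " ".join(servers[:MAX_NS]))
        return servers[:MAX_NS]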
Dec 13 08:53:42.536527 kubelet[2608]: I1213 08:53:42.535556 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7n6\" (UniqueName: \"kubernetes.io/projected/ddf3153c-b769-4b7e-ba57-f0bc3c3374a4-kube-api-access-wf7n6\") pod \"calico-apiserver-5c5f75d696-2kzjr\" (UID: \"ddf3153c-b769-4b7e-ba57-f0bc3c3374a4\") " pod="calico-apiserver/calico-apiserver-5c5f75d696-2kzjr" Dec 13 08:53:42.536527 kubelet[2608]: I1213 08:53:42.535609 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f39249e-c6ea-419b-8152-f432f5354acc-tigera-ca-bundle\") pod \"calico-kube-controllers-57b6c9448b-wct54\" (UID: \"5f39249e-c6ea-419b-8152-f432f5354acc\") " pod="calico-system/calico-kube-controllers-57b6c9448b-wct54" Dec 13 08:53:42.536527 kubelet[2608]: I1213 08:53:42.535641 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32f1faf5-14a7-4e77-ad30-f5c2a4239f44-config-volume\") pod \"coredns-7db6d8ff4d-jjl49\" (UID: \"32f1faf5-14a7-4e77-ad30-f5c2a4239f44\") " pod="kube-system/coredns-7db6d8ff4d-jjl49" Dec 13 08:53:42.536527 kubelet[2608]: I1213 08:53:42.535666 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5b39035-4771-4bba-abc4-b862e2d1a098-config-volume\") pod \"coredns-7db6d8ff4d-fq2kp\" (UID: \"d5b39035-4771-4bba-abc4-b862e2d1a098\") " pod="kube-system/coredns-7db6d8ff4d-fq2kp" Dec 13 08:53:42.536527 kubelet[2608]: I1213 08:53:42.535692 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t2gb\" (UniqueName: \"kubernetes.io/projected/d5b39035-4771-4bba-abc4-b862e2d1a098-kube-api-access-8t2gb\") pod \"coredns-7db6d8ff4d-fq2kp\" (UID: \"d5b39035-4771-4bba-abc4-b862e2d1a098\") " pod="kube-system/coredns-7db6d8ff4d-fq2kp" Dec 13 08:53:42.535836 systemd[1]: Created slice kubepods-burstable-podd5b39035_4771_4bba_abc4_b862e2d1a098.slice - libcontainer container kubepods-burstable-podd5b39035_4771_4bba_abc4_b862e2d1a098.slice. 
Dec 13 08:53:42.537137 kubelet[2608]: I1213 08:53:42.535746 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqgdm\" (UniqueName: \"kubernetes.io/projected/32f1faf5-14a7-4e77-ad30-f5c2a4239f44-kube-api-access-zqgdm\") pod \"coredns-7db6d8ff4d-jjl49\" (UID: \"32f1faf5-14a7-4e77-ad30-f5c2a4239f44\") " pod="kube-system/coredns-7db6d8ff4d-jjl49" Dec 13 08:53:42.537137 kubelet[2608]: I1213 08:53:42.535774 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplvb\" (UniqueName: \"kubernetes.io/projected/5f39249e-c6ea-419b-8152-f432f5354acc-kube-api-access-bplvb\") pod \"calico-kube-controllers-57b6c9448b-wct54\" (UID: \"5f39249e-c6ea-419b-8152-f432f5354acc\") " pod="calico-system/calico-kube-controllers-57b6c9448b-wct54" Dec 13 08:53:42.537137 kubelet[2608]: I1213 08:53:42.535800 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ddf3153c-b769-4b7e-ba57-f0bc3c3374a4-calico-apiserver-certs\") pod \"calico-apiserver-5c5f75d696-2kzjr\" (UID: \"ddf3153c-b769-4b7e-ba57-f0bc3c3374a4\") " pod="calico-apiserver/calico-apiserver-5c5f75d696-2kzjr" Dec 13 08:53:42.537137 kubelet[2608]: I1213 08:53:42.535835 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ba1434c-4e4e-46fd-97f9-ebbb427b8559-calico-apiserver-certs\") pod \"calico-apiserver-5c5f75d696-w7vmr\" (UID: \"8ba1434c-4e4e-46fd-97f9-ebbb427b8559\") " pod="calico-apiserver/calico-apiserver-5c5f75d696-w7vmr" Dec 13 08:53:42.537137 kubelet[2608]: I1213 08:53:42.535862 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj67x\" (UniqueName: \"kubernetes.io/projected/8ba1434c-4e4e-46fd-97f9-ebbb427b8559-kube-api-access-hj67x\") pod \"calico-apiserver-5c5f75d696-w7vmr\" (UID: \"8ba1434c-4e4e-46fd-97f9-ebbb427b8559\") " pod="calico-apiserver/calico-apiserver-5c5f75d696-w7vmr" Dec 13 08:53:42.555040 systemd[1]: Created slice kubepods-burstable-pod32f1faf5_14a7_4e77_ad30_f5c2a4239f44.slice - libcontainer container kubepods-burstable-pod32f1faf5_14a7_4e77_ad30_f5c2a4239f44.slice. Dec 13 08:53:42.563799 systemd[1]: Created slice kubepods-besteffort-podddf3153c_b769_4b7e_ba57_f0bc3c3374a4.slice - libcontainer container kubepods-besteffort-podddf3153c_b769_4b7e_ba57_f0bc3c3374a4.slice. 
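Note: each "Created slice kubepods-..." line above pairs a pod's QoS class with its UID: besteffort pods land under kubepods-besteffort-pod<uid>.slice and burstable pods under kubepods-burstable-pod<uid>.slice, with the dashes in the UID turned into underscores so the name is a valid systemd unit. A small sketch of that mapping; the helper name is made up, but the pattern mirrors the slices actually logged here.

    # Reconstructing the slice names seen in the "Created slice kubepods-..." entries.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # systemd unit names cannot carry the pod UID's dashes, so they become underscores.
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("besteffort", "5f39249e-c6ea-419b-8152-f432f5354acc"))
    # -> kubepods-besteffort-pod5f39249e_c6ea_419b_8152_f432f5354acc.slice
    print(pod_slice_name("burstable", "d5b39035-4771-4bba-abc4-b862e2d1a098"))
    # -> kubepods-burstable-podd5b39035_4771_4bba_abc4_b862e2d1a098.slice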
Dec 13 08:53:42.806082 containerd[1483]: time="2024-12-13T08:53:42.805998874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b6c9448b-wct54,Uid:5f39249e-c6ea-419b-8152-f432f5354acc,Namespace:calico-system,Attempt:0,}" Dec 13 08:53:42.828820 containerd[1483]: time="2024-12-13T08:53:42.828381560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f75d696-w7vmr,Uid:8ba1434c-4e4e-46fd-97f9-ebbb427b8559,Namespace:calico-apiserver,Attempt:0,}" Dec 13 08:53:42.869470 containerd[1483]: time="2024-12-13T08:53:42.869425297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f75d696-2kzjr,Uid:ddf3153c-b769-4b7e-ba57-f0bc3c3374a4,Namespace:calico-apiserver,Attempt:0,}" Dec 13 08:53:43.123664 containerd[1483]: time="2024-12-13T08:53:43.123488530Z" level=error msg="Failed to destroy network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.127485 containerd[1483]: time="2024-12-13T08:53:43.126680672Z" level=error msg="Failed to destroy network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.128375 containerd[1483]: time="2024-12-13T08:53:43.128327682Z" level=error msg="encountered an error cleaning up failed sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.128629 containerd[1483]: time="2024-12-13T08:53:43.128602771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b6c9448b-wct54,Uid:5f39249e-c6ea-419b-8152-f432f5354acc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.129017 containerd[1483]: time="2024-12-13T08:53:43.128365481Z" level=error msg="encountered an error cleaning up failed sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.129017 containerd[1483]: time="2024-12-13T08:53:43.128881181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f75d696-2kzjr,Uid:ddf3153c-b769-4b7e-ba57-f0bc3c3374a4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 
13 08:53:43.135103 containerd[1483]: time="2024-12-13T08:53:43.128454712Z" level=error msg="Failed to destroy network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.135738 containerd[1483]: time="2024-12-13T08:53:43.135558567Z" level=error msg="encountered an error cleaning up failed sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.135738 containerd[1483]: time="2024-12-13T08:53:43.135636743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f75d696-w7vmr,Uid:8ba1434c-4e4e-46fd-97f9-ebbb427b8559,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.136593 kubelet[2608]: E1213 08:53:43.136247 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.136593 kubelet[2608]: E1213 08:53:43.136338 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.136593 kubelet[2608]: E1213 08:53:43.136366 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5f75d696-w7vmr" Dec 13 08:53:43.136593 kubelet[2608]: E1213 08:53:43.136391 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b6c9448b-wct54" Dec 13 08:53:43.136807 kubelet[2608]: E1213 08:53:43.136396 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5f75d696-w7vmr" Dec 13 08:53:43.136807 kubelet[2608]: E1213 08:53:43.136412 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b6c9448b-wct54" Dec 13 08:53:43.136807 kubelet[2608]: E1213 08:53:43.136451 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57b6c9448b-wct54_calico-system(5f39249e-c6ea-419b-8152-f432f5354acc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57b6c9448b-wct54_calico-system(5f39249e-c6ea-419b-8152-f432f5354acc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57b6c9448b-wct54" podUID="5f39249e-c6ea-419b-8152-f432f5354acc" Dec 13 08:53:43.137061 kubelet[2608]: E1213 08:53:43.136459 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c5f75d696-w7vmr_calico-apiserver(8ba1434c-4e4e-46fd-97f9-ebbb427b8559)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c5f75d696-w7vmr_calico-apiserver(8ba1434c-4e4e-46fd-97f9-ebbb427b8559)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5f75d696-w7vmr" podUID="8ba1434c-4e4e-46fd-97f9-ebbb427b8559" Dec 13 08:53:43.137061 kubelet[2608]: E1213 08:53:43.136496 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.137061 kubelet[2608]: E1213 08:53:43.136513 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5f75d696-2kzjr" Dec 13 08:53:43.137234 kubelet[2608]: E1213 08:53:43.136527 2608 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5f75d696-2kzjr" Dec 13 08:53:43.137234 kubelet[2608]: E1213 08:53:43.136554 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c5f75d696-2kzjr_calico-apiserver(ddf3153c-b769-4b7e-ba57-f0bc3c3374a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c5f75d696-2kzjr_calico-apiserver(ddf3153c-b769-4b7e-ba57-f0bc3c3374a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5f75d696-2kzjr" podUID="ddf3153c-b769-4b7e-ba57-f0bc3c3374a4" Dec 13 08:53:43.362804 systemd[1]: Created slice kubepods-besteffort-pod9c2374d2_93f5_41dd_beaa_5f3be640a74e.slice - libcontainer container kubepods-besteffort-pod9c2374d2_93f5_41dd_beaa_5f3be640a74e.slice. Dec 13 08:53:43.366149 containerd[1483]: time="2024-12-13T08:53:43.366104277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd225,Uid:9c2374d2-93f5-41dd-beaa-5f3be640a74e,Namespace:calico-system,Attempt:0,}" Dec 13 08:53:43.474932 containerd[1483]: time="2024-12-13T08:53:43.474854794Z" level=error msg="Failed to destroy network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.475673 containerd[1483]: time="2024-12-13T08:53:43.475465456Z" level=error msg="encountered an error cleaning up failed sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.475673 containerd[1483]: time="2024-12-13T08:53:43.475561177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd225,Uid:9c2374d2-93f5-41dd-beaa-5f3be640a74e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.476164 kubelet[2608]: E1213 08:53:43.476122 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
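Note: every sandbox failure in this stretch resolves to the same root cause stated in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file written by the calico/node container once it is running, and until that file exists both RunPodSandbox and the follow-up StopPodSandbox cleanups fail. A sketch of that gate, using only the path quoted in the log:

    # The check the Calico CNI errors above keep failing on: the plugin needs
    # /var/lib/calico/nodename (written by the calico/node container) before it
    # can set up or tear down pod networking.
    NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted verbatim in the log

    def calico_node_ready() -> bool:
        try:
            with open(NODENAME_FILE) as f:
                return bool(f.read().strip())
        except FileNotFoundError:
            # Matches the logged failure: "stat /var/lib/calico/nodename: no such
            # file or directory: check that the calico/node container is running".
            return False

    print("calico/node ready:", calico_node_ready())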
Dec 13 08:53:43.477246 kubelet[2608]: E1213 08:53:43.476701 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd225" Dec 13 08:53:43.477246 kubelet[2608]: E1213 08:53:43.476740 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd225" Dec 13 08:53:43.477617 kubelet[2608]: E1213 08:53:43.476816 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd225_calico-system(9c2374d2-93f5-41dd-beaa-5f3be640a74e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd225_calico-system(9c2374d2-93f5-41dd-beaa-5f3be640a74e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd225" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" Dec 13 08:53:43.516376 kubelet[2608]: I1213 08:53:43.516314 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:53:43.521310 kubelet[2608]: I1213 08:53:43.521269 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:53:43.523801 containerd[1483]: time="2024-12-13T08:53:43.523754759Z" level=info msg="StopPodSandbox for \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\"" Dec 13 08:53:43.529126 containerd[1483]: time="2024-12-13T08:53:43.527791002Z" level=info msg="Ensure that sandbox 8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076 in task-service has been cleanup successfully" Dec 13 08:53:43.531202 containerd[1483]: time="2024-12-13T08:53:43.531120151Z" level=info msg="StopPodSandbox for \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\"" Dec 13 08:53:43.531931 containerd[1483]: time="2024-12-13T08:53:43.531884140Z" level=info msg="Ensure that sandbox 2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908 in task-service has been cleanup successfully" Dec 13 08:53:43.533408 kubelet[2608]: I1213 08:53:43.532852 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:53:43.535734 containerd[1483]: time="2024-12-13T08:53:43.535702439Z" level=info msg="StopPodSandbox for \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\"" Dec 13 08:53:43.537248 containerd[1483]: time="2024-12-13T08:53:43.537208206Z" level=info msg="Ensure that sandbox 
78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686 in task-service has been cleanup successfully" Dec 13 08:53:43.537587 kubelet[2608]: I1213 08:53:43.537568 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:53:43.539146 containerd[1483]: time="2024-12-13T08:53:43.538779522Z" level=info msg="StopPodSandbox for \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\"" Dec 13 08:53:43.539443 containerd[1483]: time="2024-12-13T08:53:43.539321691Z" level=info msg="Ensure that sandbox fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8 in task-service has been cleanup successfully" Dec 13 08:53:43.636251 containerd[1483]: time="2024-12-13T08:53:43.636166480Z" level=error msg="StopPodSandbox for \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\" failed" error="failed to destroy network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.636761 kubelet[2608]: E1213 08:53:43.636519 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:53:43.636761 kubelet[2608]: E1213 08:53:43.636585 2608 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076"} Dec 13 08:53:43.636761 kubelet[2608]: E1213 08:53:43.636655 2608 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c2374d2-93f5-41dd-beaa-5f3be640a74e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:53:43.636761 kubelet[2608]: E1213 08:53:43.636680 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c2374d2-93f5-41dd-beaa-5f3be640a74e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd225" podUID="9c2374d2-93f5-41dd-beaa-5f3be640a74e" Dec 13 08:53:43.640259 containerd[1483]: time="2024-12-13T08:53:43.639242737Z" level=error msg="StopPodSandbox for \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\" failed" error="failed to destroy network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.640259 containerd[1483]: time="2024-12-13T08:53:43.639483744Z" level=error msg="StopPodSandbox for \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\" failed" error="failed to destroy network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.640520 kubelet[2608]: E1213 08:53:43.639796 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:53:43.640520 kubelet[2608]: E1213 08:53:43.639796 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:53:43.640520 kubelet[2608]: E1213 08:53:43.639970 2608 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908"} Dec 13 08:53:43.640520 kubelet[2608]: E1213 08:53:43.640027 2608 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ddf3153c-b769-4b7e-ba57-f0bc3c3374a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:53:43.640801 kubelet[2608]: E1213 08:53:43.640065 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ddf3153c-b769-4b7e-ba57-f0bc3c3374a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5f75d696-2kzjr" podUID="ddf3153c-b769-4b7e-ba57-f0bc3c3374a4" Dec 13 08:53:43.640801 kubelet[2608]: E1213 08:53:43.639849 2608 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686"} Dec 13 08:53:43.640801 kubelet[2608]: E1213 08:53:43.640122 2608 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"8ba1434c-4e4e-46fd-97f9-ebbb427b8559\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:53:43.640801 kubelet[2608]: E1213 08:53:43.640149 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ba1434c-4e4e-46fd-97f9-ebbb427b8559\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5f75d696-w7vmr" podUID="8ba1434c-4e4e-46fd-97f9-ebbb427b8559" Dec 13 08:53:43.641175 containerd[1483]: time="2024-12-13T08:53:43.641113781Z" level=error msg="StopPodSandbox for \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\" failed" error="failed to destroy network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:43.641507 kubelet[2608]: E1213 08:53:43.641470 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:53:43.641564 kubelet[2608]: E1213 08:53:43.641526 2608 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8"} Dec 13 08:53:43.641603 kubelet[2608]: E1213 08:53:43.641560 2608 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f39249e-c6ea-419b-8152-f432f5354acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:53:43.641603 kubelet[2608]: E1213 08:53:43.641582 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f39249e-c6ea-419b-8152-f432f5354acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57b6c9448b-wct54" podUID="5f39249e-c6ea-419b-8152-f432f5354acc" Dec 13 
08:53:43.746590 kubelet[2608]: E1213 08:53:43.746458 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:43.761940 kubelet[2608]: E1213 08:53:43.761884 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:43.764247 containerd[1483]: time="2024-12-13T08:53:43.762540242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fq2kp,Uid:d5b39035-4771-4bba-abc4-b862e2d1a098,Namespace:kube-system,Attempt:0,}" Dec 13 08:53:43.764247 containerd[1483]: time="2024-12-13T08:53:43.762904480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jjl49,Uid:32f1faf5-14a7-4e77-ad30-f5c2a4239f44,Namespace:kube-system,Attempt:0,}" Dec 13 08:53:44.050087 containerd[1483]: time="2024-12-13T08:53:44.049898303Z" level=error msg="Failed to destroy network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.050435 containerd[1483]: time="2024-12-13T08:53:44.050395546Z" level=error msg="encountered an error cleaning up failed sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.050512 containerd[1483]: time="2024-12-13T08:53:44.050462320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jjl49,Uid:32f1faf5-14a7-4e77-ad30-f5c2a4239f44,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.050974 kubelet[2608]: E1213 08:53:44.050908 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.051284 kubelet[2608]: E1213 08:53:44.051253 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jjl49" Dec 13 08:53:44.051438 kubelet[2608]: E1213 08:53:44.051413 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jjl49" Dec 13 08:53:44.051647 kubelet[2608]: E1213 08:53:44.051608 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jjl49_kube-system(32f1faf5-14a7-4e77-ad30-f5c2a4239f44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jjl49_kube-system(32f1faf5-14a7-4e77-ad30-f5c2a4239f44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jjl49" podUID="32f1faf5-14a7-4e77-ad30-f5c2a4239f44" Dec 13 08:53:44.065142 containerd[1483]: time="2024-12-13T08:53:44.065068009Z" level=error msg="Failed to destroy network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.065479 containerd[1483]: time="2024-12-13T08:53:44.065448916Z" level=error msg="encountered an error cleaning up failed sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.065556 containerd[1483]: time="2024-12-13T08:53:44.065519345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fq2kp,Uid:d5b39035-4771-4bba-abc4-b862e2d1a098,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.065956 kubelet[2608]: E1213 08:53:44.065882 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.066152 kubelet[2608]: E1213 08:53:44.065981 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fq2kp" Dec 13 08:53:44.066152 kubelet[2608]: E1213 08:53:44.066018 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fq2kp" Dec 13 08:53:44.066152 kubelet[2608]: E1213 08:53:44.066069 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fq2kp_kube-system(d5b39035-4771-4bba-abc4-b862e2d1a098)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fq2kp_kube-system(d5b39035-4771-4bba-abc4-b862e2d1a098)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fq2kp" podUID="d5b39035-4771-4bba-abc4-b862e2d1a098" Dec 13 08:53:44.542870 kubelet[2608]: I1213 08:53:44.541397 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:53:44.544611 containerd[1483]: time="2024-12-13T08:53:44.543898452Z" level=info msg="StopPodSandbox for \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\"" Dec 13 08:53:44.544611 containerd[1483]: time="2024-12-13T08:53:44.544143883Z" level=info msg="Ensure that sandbox bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad in task-service has been cleanup successfully" Dec 13 08:53:44.566449 kubelet[2608]: I1213 08:53:44.565699 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:53:44.570241 containerd[1483]: time="2024-12-13T08:53:44.568590730Z" level=info msg="StopPodSandbox for \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\"" Dec 13 08:53:44.570241 containerd[1483]: time="2024-12-13T08:53:44.568835582Z" level=info msg="Ensure that sandbox e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf in task-service has been cleanup successfully" Dec 13 08:53:44.628696 containerd[1483]: time="2024-12-13T08:53:44.628588342Z" level=error msg="StopPodSandbox for \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\" failed" error="failed to destroy network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.629386 kubelet[2608]: E1213 08:53:44.629338 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:53:44.629695 kubelet[2608]: E1213 08:53:44.629668 2608 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad"} Dec 13 08:53:44.630358 kubelet[2608]: E1213 08:53:44.630325 2608 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32f1faf5-14a7-4e77-ad30-f5c2a4239f44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:53:44.631290 kubelet[2608]: E1213 08:53:44.630372 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32f1faf5-14a7-4e77-ad30-f5c2a4239f44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jjl49" podUID="32f1faf5-14a7-4e77-ad30-f5c2a4239f44" Dec 13 08:53:44.648626 containerd[1483]: time="2024-12-13T08:53:44.645337542Z" level=error msg="StopPodSandbox for \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\" failed" error="failed to destroy network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:44.648791 kubelet[2608]: E1213 08:53:44.648410 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:53:44.648791 kubelet[2608]: E1213 08:53:44.648486 2608 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf"} Dec 13 08:53:44.648791 kubelet[2608]: E1213 08:53:44.648531 2608 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5b39035-4771-4bba-abc4-b862e2d1a098\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:53:44.648791 kubelet[2608]: E1213 08:53:44.648569 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5b39035-4771-4bba-abc4-b862e2d1a098\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fq2kp" podUID="d5b39035-4771-4bba-abc4-b862e2d1a098" Dec 13 08:53:44.655565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf-shm.mount: Deactivated successfully. Dec 13 08:53:44.655673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad-shm.mount: Deactivated successfully. Dec 13 08:53:46.269933 kubelet[2608]: I1213 08:53:46.269895 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:53:46.271858 kubelet[2608]: E1213 08:53:46.270914 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:46.575403 kubelet[2608]: E1213 08:53:46.574736 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:48.654652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60633031.mount: Deactivated successfully. Dec 13 08:53:48.767821 containerd[1483]: time="2024-12-13T08:53:48.758110615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 08:53:48.784308 containerd[1483]: time="2024-12-13T08:53:48.783653382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:48.819957 containerd[1483]: time="2024-12-13T08:53:48.819834837Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:48.822318 containerd[1483]: time="2024-12-13T08:53:48.821728921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:48.825772 containerd[1483]: time="2024-12-13T08:53:48.825702340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.304386315s" Dec 13 08:53:48.825772 containerd[1483]: time="2024-12-13T08:53:48.825773644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 08:53:48.891893 containerd[1483]: time="2024-12-13T08:53:48.891825860Z" level=info msg="CreateContainer within sandbox \"264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 08:53:49.011714 containerd[1483]: time="2024-12-13T08:53:49.011553938Z" level=info msg="CreateContainer within sandbox \"264cb4f28f0e3876901929c75ece8bd79fa646e5d942af07beac55608264fb0a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4\"" Dec 13 08:53:49.012773 containerd[1483]: time="2024-12-13T08:53:49.012682237Z" level=info msg="StartContainer for \"8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4\"" Dec 13 08:53:49.183520 systemd[1]: Started cri-containerd-8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4.scope - libcontainer container 8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4. Dec 13 08:53:49.266290 containerd[1483]: time="2024-12-13T08:53:49.265899981Z" level=info msg="StartContainer for \"8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4\" returns successfully" Dec 13 08:53:49.409229 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 08:53:49.418181 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 08:53:49.602852 kubelet[2608]: E1213 08:53:49.602681 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:49.698386 kubelet[2608]: I1213 08:53:49.678437 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dlxd6" podStartSLOduration=2.153772205 podStartE2EDuration="18.638751859s" podCreationTimestamp="2024-12-13 08:53:31 +0000 UTC" firstStartedPulling="2024-12-13 08:53:32.358378503 +0000 UTC m=+25.152569197" lastFinishedPulling="2024-12-13 08:53:48.843358144 +0000 UTC m=+41.637548851" observedRunningTime="2024-12-13 08:53:49.637947132 +0000 UTC m=+42.432137850" watchObservedRunningTime="2024-12-13 08:53:49.638751859 +0000 UTC m=+42.432942567" Dec 13 08:53:50.601283 kubelet[2608]: I1213 08:53:50.601244 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:53:50.601976 kubelet[2608]: E1213 08:53:50.601946 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:51.358396 kernel: bpftool[3836]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 08:53:51.677603 systemd-networkd[1363]: vxlan.calico: Link UP Dec 13 08:53:51.677614 systemd-networkd[1363]: vxlan.calico: Gained carrier Dec 13 08:53:51.731666 systemd[1]: Started sshd@12-144.126.221.125:22-218.92.0.157:43031.service - OpenSSH per-connection server daemon (218.92.0.157:43031). Dec 13 08:53:53.049548 sshd[3908]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 08:53:53.704006 systemd-networkd[1363]: vxlan.calico: Gained IPv6LL Dec 13 08:53:53.762140 kubelet[2608]: I1213 08:53:53.762072 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:53:53.786090 kubelet[2608]: E1213 08:53:53.784810 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:53.919650 systemd[1]: run-containerd-runc-k8s.io-8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4-runc.t4RCzx.mount: Deactivated successfully. 
Dec 13 08:53:54.358128 containerd[1483]: time="2024-12-13T08:53:54.357064474Z" level=info msg="StopPodSandbox for \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\"" Dec 13 08:53:54.541619 sshd[3869]: PAM: Permission denied for root from 218.92.0.157 Dec 13 08:53:54.627023 kubelet[2608]: E1213 08:53:54.626834 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.477 [INFO][3971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.478 [INFO][3971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" iface="eth0" netns="/var/run/netns/cni-bc366a93-f0de-d246-e556-45199cfb57e6" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.479 [INFO][3971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" iface="eth0" netns="/var/run/netns/cni-bc366a93-f0de-d246-e556-45199cfb57e6" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.483 [INFO][3971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" iface="eth0" netns="/var/run/netns/cni-bc366a93-f0de-d246-e556-45199cfb57e6" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.483 [INFO][3971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.483 [INFO][3971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.791 [INFO][3977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.795 [INFO][3977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.796 [INFO][3977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.814 [WARNING][3977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.814 [INFO][3977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.816 [INFO][3977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:53:54.821617 containerd[1483]: 2024-12-13 08:53:54.818 [INFO][3971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:53:54.828988 systemd[1]: run-netns-cni\x2dbc366a93\x2df0de\x2dd246\x2de556\x2d45199cfb57e6.mount: Deactivated successfully. Dec 13 08:53:54.832646 containerd[1483]: time="2024-12-13T08:53:54.832563195Z" level=info msg="TearDown network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\" successfully" Dec 13 08:53:54.832760 containerd[1483]: time="2024-12-13T08:53:54.832640680Z" level=info msg="StopPodSandbox for \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\" returns successfully" Dec 13 08:53:54.843337 containerd[1483]: time="2024-12-13T08:53:54.843283637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f75d696-2kzjr,Uid:ddf3153c-b769-4b7e-ba57-f0bc3c3374a4,Namespace:calico-apiserver,Attempt:1,}" Dec 13 08:53:55.076264 systemd-networkd[1363]: cali9e85804ae9e: Link UP Dec 13 08:53:55.076602 systemd-networkd[1363]: cali9e85804ae9e: Gained carrier Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:54.945 [INFO][3984] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0 calico-apiserver-5c5f75d696- calico-apiserver ddf3153c-b769-4b7e-ba57-f0bc3c3374a4 782 0 2024-12-13 08:53:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c5f75d696 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-4-b1553ec4eb calico-apiserver-5c5f75d696-2kzjr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e85804ae9e [] []}} ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-2kzjr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:54.946 [INFO][3984] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-2kzjr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.002 [INFO][3995] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" HandleID="k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.016 [INFO][3995] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" HandleID="k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-4-b1553ec4eb", "pod":"calico-apiserver-5c5f75d696-2kzjr", "timestamp":"2024-12-13 08:53:55.002898357 +0000 UTC"}, Hostname:"ci-4081.2.1-4-b1553ec4eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.016 [INFO][3995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.017 [INFO][3995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.017 [INFO][3995] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-4-b1553ec4eb' Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.020 [INFO][3995] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.031 [INFO][3995] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.038 [INFO][3995] ipam/ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.041 [INFO][3995] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.045 [INFO][3995] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.045 [INFO][3995] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.047 [INFO][3995] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817 Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.054 [INFO][3995] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.065 [INFO][3995] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.1/26] block=192.168.124.0/26 
handle="k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.066 [INFO][3995] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.1/26] handle="k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.066 [INFO][3995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:53:55.124644 containerd[1483]: 2024-12-13 08:53:55.066 [INFO][3995] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.1/26] IPv6=[] ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" HandleID="k8s-pod-network.2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:55.127955 containerd[1483]: 2024-12-13 08:53:55.070 [INFO][3984] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-2kzjr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0", GenerateName:"calico-apiserver-5c5f75d696-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddf3153c-b769-4b7e-ba57-f0bc3c3374a4", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f75d696", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"", Pod:"calico-apiserver-5c5f75d696-2kzjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e85804ae9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:53:55.127955 containerd[1483]: 2024-12-13 08:53:55.070 [INFO][3984] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.1/32] ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-2kzjr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:55.127955 containerd[1483]: 2024-12-13 08:53:55.070 [INFO][3984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e85804ae9e ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-2kzjr" 
WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:55.127955 containerd[1483]: 2024-12-13 08:53:55.077 [INFO][3984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-2kzjr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:55.127955 containerd[1483]: 2024-12-13 08:53:55.078 [INFO][3984] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-2kzjr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0", GenerateName:"calico-apiserver-5c5f75d696-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddf3153c-b769-4b7e-ba57-f0bc3c3374a4", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f75d696", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817", Pod:"calico-apiserver-5c5f75d696-2kzjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e85804ae9e", MAC:"22:c3:ae:c9:22:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:53:55.127955 containerd[1483]: 2024-12-13 08:53:55.101 [INFO][3984] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-2kzjr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:53:55.178475 containerd[1483]: time="2024-12-13T08:53:55.178301255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:55.178475 containerd[1483]: time="2024-12-13T08:53:55.178407297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:55.179378 containerd[1483]: time="2024-12-13T08:53:55.178440318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:55.180140 containerd[1483]: time="2024-12-13T08:53:55.180022819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:55.215582 systemd[1]: Started cri-containerd-2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817.scope - libcontainer container 2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817. Dec 13 08:53:55.283298 containerd[1483]: time="2024-12-13T08:53:55.283235445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f75d696-2kzjr,Uid:ddf3153c-b769-4b7e-ba57-f0bc3c3374a4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817\"" Dec 13 08:53:55.315248 containerd[1483]: time="2024-12-13T08:53:55.315161990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 08:53:56.356632 containerd[1483]: time="2024-12-13T08:53:56.356258533Z" level=info msg="StopPodSandbox for \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\"" Dec 13 08:53:56.455556 systemd-networkd[1363]: cali9e85804ae9e: Gained IPv6LL Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.423 [INFO][4067] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.423 [INFO][4067] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" iface="eth0" netns="/var/run/netns/cni-b7cdafeb-1918-c3c1-b5ab-439d0d904867" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.424 [INFO][4067] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" iface="eth0" netns="/var/run/netns/cni-b7cdafeb-1918-c3c1-b5ab-439d0d904867" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.425 [INFO][4067] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" iface="eth0" netns="/var/run/netns/cni-b7cdafeb-1918-c3c1-b5ab-439d0d904867" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.425 [INFO][4067] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.425 [INFO][4067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.472 [INFO][4073] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.472 [INFO][4073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.472 [INFO][4073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.480 [WARNING][4073] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.480 [INFO][4073] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.483 [INFO][4073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:53:56.488835 containerd[1483]: 2024-12-13 08:53:56.485 [INFO][4067] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:53:56.492125 containerd[1483]: time="2024-12-13T08:53:56.490934716Z" level=info msg="TearDown network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\" successfully" Dec 13 08:53:56.492125 containerd[1483]: time="2024-12-13T08:53:56.490983400Z" level=info msg="StopPodSandbox for \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\" returns successfully" Dec 13 08:53:56.491865 systemd[1]: run-netns-cni\x2db7cdafeb\x2d1918\x2dc3c1\x2db5ab\x2d439d0d904867.mount: Deactivated successfully. Dec 13 08:53:56.494402 containerd[1483]: time="2024-12-13T08:53:56.494337792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b6c9448b-wct54,Uid:5f39249e-c6ea-419b-8152-f432f5354acc,Namespace:calico-system,Attempt:1,}" Dec 13 08:53:56.750134 systemd-networkd[1363]: calia7994ea9a7e: Link UP Dec 13 08:53:56.752490 systemd-networkd[1363]: calia7994ea9a7e: Gained carrier Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.626 [INFO][4079] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0 calico-kube-controllers-57b6c9448b- calico-system 5f39249e-c6ea-419b-8152-f432f5354acc 794 0 2024-12-13 08:53:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57b6c9448b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-4-b1553ec4eb calico-kube-controllers-57b6c9448b-wct54 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia7994ea9a7e [] []}} ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Namespace="calico-system" Pod="calico-kube-controllers-57b6c9448b-wct54" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.627 [INFO][4079] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Namespace="calico-system" Pod="calico-kube-controllers-57b6c9448b-wct54" 
WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.675 [INFO][4090] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" HandleID="k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.692 [INFO][4090] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" HandleID="k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000514a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-4-b1553ec4eb", "pod":"calico-kube-controllers-57b6c9448b-wct54", "timestamp":"2024-12-13 08:53:56.675690853 +0000 UTC"}, Hostname:"ci-4081.2.1-4-b1553ec4eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.692 [INFO][4090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.692 [INFO][4090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.692 [INFO][4090] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-4-b1553ec4eb' Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.694 [INFO][4090] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.700 [INFO][4090] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.710 [INFO][4090] ipam/ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.714 [INFO][4090] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.717 [INFO][4090] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.717 [INFO][4090] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.719 [INFO][4090] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8 Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.726 [INFO][4090] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" 
host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.739 [INFO][4090] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.2/26] block=192.168.124.0/26 handle="k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.740 [INFO][4090] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.2/26] handle="k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.740 [INFO][4090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:53:56.786277 containerd[1483]: 2024-12-13 08:53:56.740 [INFO][4090] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.2/26] IPv6=[] ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" HandleID="k8s-pod-network.60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.790174 containerd[1483]: 2024-12-13 08:53:56.745 [INFO][4079] cni-plugin/k8s.go 386: Populated endpoint ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Namespace="calico-system" Pod="calico-kube-controllers-57b6c9448b-wct54" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0", GenerateName:"calico-kube-controllers-57b6c9448b-", Namespace:"calico-system", SelfLink:"", UID:"5f39249e-c6ea-419b-8152-f432f5354acc", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b6c9448b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"", Pod:"calico-kube-controllers-57b6c9448b-wct54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7994ea9a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:53:56.790174 containerd[1483]: 2024-12-13 08:53:56.745 [INFO][4079] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.2/32] ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Namespace="calico-system" Pod="calico-kube-controllers-57b6c9448b-wct54" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.790174 containerd[1483]: 2024-12-13 08:53:56.745 [INFO][4079] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to calia7994ea9a7e ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Namespace="calico-system" Pod="calico-kube-controllers-57b6c9448b-wct54" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.790174 containerd[1483]: 2024-12-13 08:53:56.751 [INFO][4079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Namespace="calico-system" Pod="calico-kube-controllers-57b6c9448b-wct54" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.790174 containerd[1483]: 2024-12-13 08:53:56.752 [INFO][4079] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Namespace="calico-system" Pod="calico-kube-controllers-57b6c9448b-wct54" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0", GenerateName:"calico-kube-controllers-57b6c9448b-", Namespace:"calico-system", SelfLink:"", UID:"5f39249e-c6ea-419b-8152-f432f5354acc", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b6c9448b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8", Pod:"calico-kube-controllers-57b6c9448b-wct54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7994ea9a7e", MAC:"42:60:7a:4c:da:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:53:56.790174 containerd[1483]: 2024-12-13 08:53:56.772 [INFO][4079] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8" Namespace="calico-system" Pod="calico-kube-controllers-57b6c9448b-wct54" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:53:56.858491 containerd[1483]: time="2024-12-13T08:53:56.857022692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:56.858491 containerd[1483]: time="2024-12-13T08:53:56.858423857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:56.858491 containerd[1483]: time="2024-12-13T08:53:56.858462777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:56.859235 containerd[1483]: time="2024-12-13T08:53:56.858620357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:56.898689 systemd[1]: Started cri-containerd-60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8.scope - libcontainer container 60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8. Dec 13 08:53:56.988256 containerd[1483]: time="2024-12-13T08:53:56.988185289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b6c9448b-wct54,Uid:5f39249e-c6ea-419b-8152-f432f5354acc,Namespace:calico-system,Attempt:1,} returns sandbox id \"60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8\"" Dec 13 08:53:58.247412 systemd-networkd[1363]: calia7994ea9a7e: Gained IPv6LL Dec 13 08:53:58.345303 containerd[1483]: time="2024-12-13T08:53:58.344844966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:58.348395 containerd[1483]: time="2024-12-13T08:53:58.347307946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 08:53:58.351925 containerd[1483]: time="2024-12-13T08:53:58.351876572Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:58.360215 containerd[1483]: time="2024-12-13T08:53:58.360141150Z" level=info msg="StopPodSandbox for \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\"" Dec 13 08:53:58.363681 containerd[1483]: time="2024-12-13T08:53:58.363632963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:58.368505 containerd[1483]: time="2024-12-13T08:53:58.368438074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.053178369s" Dec 13 08:53:58.368505 containerd[1483]: time="2024-12-13T08:53:58.368505935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 08:53:58.373254 containerd[1483]: time="2024-12-13T08:53:58.372688611Z" level=info msg="StopPodSandbox for \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\"" Dec 13 08:53:58.374021 containerd[1483]: time="2024-12-13T08:53:58.373958193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 08:53:58.399814 containerd[1483]: time="2024-12-13T08:53:58.398492294Z" level=info msg="CreateContainer within sandbox \"2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 08:53:58.470525 containerd[1483]: time="2024-12-13T08:53:58.470473100Z" level=info msg="CreateContainer within sandbox \"2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe8edad78e85a39ab410b8994ee9ad7f8b4a994a3f0b814a95f72baa149e3103\"" Dec 13 08:53:58.473940 containerd[1483]: time="2024-12-13T08:53:58.473898458Z" level=info msg="StartContainer for \"fe8edad78e85a39ab410b8994ee9ad7f8b4a994a3f0b814a95f72baa149e3103\"" Dec 13 08:53:58.555506 systemd[1]: Started cri-containerd-fe8edad78e85a39ab410b8994ee9ad7f8b4a994a3f0b814a95f72baa149e3103.scope - libcontainer container fe8edad78e85a39ab410b8994ee9ad7f8b4a994a3f0b814a95f72baa149e3103. Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.572 [INFO][4198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.572 [INFO][4198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" iface="eth0" netns="/var/run/netns/cni-16b40bc5-2e21-6472-7b01-f448d2bc6721" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.573 [INFO][4198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" iface="eth0" netns="/var/run/netns/cni-16b40bc5-2e21-6472-7b01-f448d2bc6721" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.574 [INFO][4198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" iface="eth0" netns="/var/run/netns/cni-16b40bc5-2e21-6472-7b01-f448d2bc6721" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.574 [INFO][4198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.574 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.663 [INFO][4232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.664 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.664 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.689 [WARNING][4232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.689 [INFO][4232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.692 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:53:58.708847 containerd[1483]: 2024-12-13 08:53:58.698 [INFO][4198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:53:58.717127 containerd[1483]: time="2024-12-13T08:53:58.715344253Z" level=info msg="TearDown network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\" successfully" Dec 13 08:53:58.717127 containerd[1483]: time="2024-12-13T08:53:58.715421708Z" level=info msg="StopPodSandbox for \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\" returns successfully" Dec 13 08:53:58.716783 systemd[1]: run-netns-cni\x2d16b40bc5\x2d2e21\x2d6472\x2d7b01\x2df448d2bc6721.mount: Deactivated successfully. Dec 13 08:53:58.727234 containerd[1483]: time="2024-12-13T08:53:58.726353101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f75d696-w7vmr,Uid:8ba1434c-4e4e-46fd-97f9-ebbb427b8559,Namespace:calico-apiserver,Attempt:1,}" Dec 13 08:53:58.739062 containerd[1483]: time="2024-12-13T08:53:58.738859922Z" level=info msg="StartContainer for \"fe8edad78e85a39ab410b8994ee9ad7f8b4a994a3f0b814a95f72baa149e3103\" returns successfully" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.579 [INFO][4188] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.579 [INFO][4188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" iface="eth0" netns="/var/run/netns/cni-41265845-8f62-02a6-56c0-f4ac80efb2e8" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.581 [INFO][4188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" iface="eth0" netns="/var/run/netns/cni-41265845-8f62-02a6-56c0-f4ac80efb2e8" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.582 [INFO][4188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" iface="eth0" netns="/var/run/netns/cni-41265845-8f62-02a6-56c0-f4ac80efb2e8" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.582 [INFO][4188] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.582 [INFO][4188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.681 [INFO][4233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.682 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.692 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.706 [WARNING][4233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.708 [INFO][4233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.718 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:53:58.739729 containerd[1483]: 2024-12-13 08:53:58.731 [INFO][4188] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:53:58.742467 containerd[1483]: time="2024-12-13T08:53:58.742419229Z" level=info msg="TearDown network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\" successfully" Dec 13 08:53:58.742467 containerd[1483]: time="2024-12-13T08:53:58.742454299Z" level=info msg="StopPodSandbox for \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\" returns successfully" Dec 13 08:53:58.743309 kubelet[2608]: E1213 08:53:58.743050 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:58.744085 containerd[1483]: time="2024-12-13T08:53:58.743537489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jjl49,Uid:32f1faf5-14a7-4e77-ad30-f5c2a4239f44,Namespace:kube-system,Attempt:1,}" Dec 13 08:53:59.143807 systemd-networkd[1363]: calidd01b34e7e3: Link UP Dec 13 08:53:59.144975 systemd-networkd[1363]: calidd01b34e7e3: Gained carrier Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:58.957 [INFO][4268] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0 calico-apiserver-5c5f75d696- calico-apiserver 8ba1434c-4e4e-46fd-97f9-ebbb427b8559 809 0 2024-12-13 08:53:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c5f75d696 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-4-b1553ec4eb calico-apiserver-5c5f75d696-w7vmr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd01b34e7e3 [] []}} ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-w7vmr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:58.957 [INFO][4268] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-w7vmr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.018 [INFO][4285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" HandleID="k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.063 [INFO][4285] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" HandleID="k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-4-b1553ec4eb", "pod":"calico-apiserver-5c5f75d696-w7vmr", 
"timestamp":"2024-12-13 08:53:59.018340814 +0000 UTC"}, Hostname:"ci-4081.2.1-4-b1553ec4eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.063 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.063 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.063 [INFO][4285] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-4-b1553ec4eb' Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.074 [INFO][4285] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.086 [INFO][4285] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.095 [INFO][4285] ipam/ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.098 [INFO][4285] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.103 [INFO][4285] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.103 [INFO][4285] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.106 [INFO][4285] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0 Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.114 [INFO][4285] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.128 [INFO][4285] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.3/26] block=192.168.124.0/26 handle="k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.128 [INFO][4285] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.3/26] handle="k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.128 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:53:59.184499 containerd[1483]: 2024-12-13 08:53:59.128 [INFO][4285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.3/26] IPv6=[] ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" HandleID="k8s-pod-network.a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:59.187390 containerd[1483]: 2024-12-13 08:53:59.132 [INFO][4268] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-w7vmr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0", GenerateName:"calico-apiserver-5c5f75d696-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ba1434c-4e4e-46fd-97f9-ebbb427b8559", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f75d696", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"", Pod:"calico-apiserver-5c5f75d696-w7vmr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd01b34e7e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:53:59.187390 containerd[1483]: 2024-12-13 08:53:59.132 [INFO][4268] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.3/32] ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-w7vmr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:59.187390 containerd[1483]: 2024-12-13 08:53:59.133 [INFO][4268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd01b34e7e3 ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-w7vmr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:59.187390 containerd[1483]: 2024-12-13 08:53:59.147 [INFO][4268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-w7vmr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:59.187390 containerd[1483]: 2024-12-13 08:53:59.150 [INFO][4268] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-w7vmr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0", GenerateName:"calico-apiserver-5c5f75d696-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ba1434c-4e4e-46fd-97f9-ebbb427b8559", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f75d696", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0", Pod:"calico-apiserver-5c5f75d696-w7vmr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd01b34e7e3", MAC:"be:c1:16:27:ab:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:53:59.187390 containerd[1483]: 2024-12-13 08:53:59.179 [INFO][4268] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f75d696-w7vmr" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:53:59.274214 containerd[1483]: time="2024-12-13T08:53:59.273727631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:59.274214 containerd[1483]: time="2024-12-13T08:53:59.273835822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:59.274214 containerd[1483]: time="2024-12-13T08:53:59.273864220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:59.274214 containerd[1483]: time="2024-12-13T08:53:59.274000787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:59.307739 systemd-networkd[1363]: calie0aaa043874: Link UP Dec 13 08:53:59.308618 systemd-networkd[1363]: calie0aaa043874: Gained carrier Dec 13 08:53:59.346499 systemd[1]: Started cri-containerd-a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0.scope - libcontainer container a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0. 
Dec 13 08:53:59.362718 containerd[1483]: time="2024-12-13T08:53:59.361757210Z" level=info msg="StopPodSandbox for \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\"" Dec 13 08:53:59.365215 containerd[1483]: time="2024-12-13T08:53:59.364278848Z" level=info msg="StopPodSandbox for \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\"" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:58.923 [INFO][4258] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0 coredns-7db6d8ff4d- kube-system 32f1faf5-14a7-4e77-ad30-f5c2a4239f44 810 0 2024-12-13 08:53:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-4-b1553ec4eb coredns-7db6d8ff4d-jjl49 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0aaa043874 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jjl49" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:58.924 [INFO][4258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jjl49" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.061 [INFO][4281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" HandleID="k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.090 [INFO][4281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" HandleID="k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003971f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-4-b1553ec4eb", "pod":"coredns-7db6d8ff4d-jjl49", "timestamp":"2024-12-13 08:53:59.061678143 +0000 UTC"}, Hostname:"ci-4081.2.1-4-b1553ec4eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.090 [INFO][4281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.130 [INFO][4281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.130 [INFO][4281] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-4-b1553ec4eb' Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.133 [INFO][4281] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.154 [INFO][4281] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.192 [INFO][4281] ipam/ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.206 [INFO][4281] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.214 [INFO][4281] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.215 [INFO][4281] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.221 [INFO][4281] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5 Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.241 [INFO][4281] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.291 [INFO][4281] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.4/26] block=192.168.124.0/26 handle="k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.291 [INFO][4281] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.4/26] handle="k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.291 [INFO][4281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
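The IPAM sequences in this log all follow the same pattern: acquire the host-wide lock, confirm the node's affinity for the block 192.168.124.0/26, claim the lowest free address, write the block back, and release the lock. That is why the sandboxes in this excerpt receive 192.168.124.2, .3, .4, … in order. Below is a minimal, stdlib-only sketch of just the "lowest free address in a block" step; the pre-claimed .0 and .1 entries are an assumption standing in for allocations made earlier in the boot (they do not appear in this excerpt), and nothing here touches Calico's real datastore or locking.

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks an IPv4 block (e.g. 192.168.124.0/26) and returns the first
// address not already claimed. This only illustrates the ordering visible in
// the log; real Calico IPAM persists claims in its datastore while holding the
// host-wide lock logged above.
func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.124.0/26")
	claimed := map[netip.Addr]bool{
		// Assumption: .0 and .1 were handed out earlier in the boot.
		netip.MustParseAddr("192.168.124.0"): true,
		netip.MustParseAddr("192.168.124.1"): true,
	}
	for i := 0; i < 4; i++ {
		a, ok := nextFree(block, claimed)
		if !ok {
			break
		}
		claimed[a] = true
		fmt.Println("claimed", a) // prints .2, .3, .4, .5 in order
	}
}
```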
Dec 13 08:53:59.378418 containerd[1483]: 2024-12-13 08:53:59.291 [INFO][4281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.4/26] IPv6=[] ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" HandleID="k8s-pod-network.a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:59.379123 containerd[1483]: 2024-12-13 08:53:59.300 [INFO][4258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jjl49" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"32f1faf5-14a7-4e77-ad30-f5c2a4239f44", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"", Pod:"coredns-7db6d8ff4d-jjl49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0aaa043874", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:53:59.379123 containerd[1483]: 2024-12-13 08:53:59.301 [INFO][4258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.4/32] ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jjl49" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:59.379123 containerd[1483]: 2024-12-13 08:53:59.301 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0aaa043874 ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jjl49" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:59.379123 containerd[1483]: 2024-12-13 08:53:59.309 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jjl49" 
WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:59.379123 containerd[1483]: 2024-12-13 08:53:59.314 [INFO][4258] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jjl49" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"32f1faf5-14a7-4e77-ad30-f5c2a4239f44", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5", Pod:"coredns-7db6d8ff4d-jjl49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0aaa043874", MAC:"7e:01:29:d5:90:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:53:59.379123 containerd[1483]: 2024-12-13 08:53:59.368 [INFO][4258] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jjl49" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:53:59.455635 systemd[1]: run-netns-cni\x2d41265845\x2d8f62\x2d02a6\x2d56c0\x2df4ac80efb2e8.mount: Deactivated successfully. Dec 13 08:53:59.508998 containerd[1483]: time="2024-12-13T08:53:59.507839392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:59.508998 containerd[1483]: time="2024-12-13T08:53:59.507926787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:59.508998 containerd[1483]: time="2024-12-13T08:53:59.507938640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:59.508998 containerd[1483]: time="2024-12-13T08:53:59.508051841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:59.554531 systemd[1]: run-containerd-runc-k8s.io-a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5-runc.qS7Af4.mount: Deactivated successfully. Dec 13 08:53:59.564474 systemd[1]: Started cri-containerd-a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5.scope - libcontainer container a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5. Dec 13 08:53:59.689512 containerd[1483]: time="2024-12-13T08:53:59.689449136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f75d696-w7vmr,Uid:8ba1434c-4e4e-46fd-97f9-ebbb427b8559,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0\"" Dec 13 08:53:59.756385 containerd[1483]: time="2024-12-13T08:53:59.755584515Z" level=info msg="CreateContainer within sandbox \"a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 08:53:59.815778 containerd[1483]: time="2024-12-13T08:53:59.815722847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jjl49,Uid:32f1faf5-14a7-4e77-ad30-f5c2a4239f44,Namespace:kube-system,Attempt:1,} returns sandbox id \"a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5\"" Dec 13 08:53:59.821577 kubelet[2608]: E1213 08:53:59.816918 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:59.824264 containerd[1483]: time="2024-12-13T08:53:59.824211759Z" level=info msg="CreateContainer within sandbox \"a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.690 [INFO][4381] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.690 [INFO][4381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" iface="eth0" netns="/var/run/netns/cni-4406a953-0acc-a245-bb67-247320dfc060" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.692 [INFO][4381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" iface="eth0" netns="/var/run/netns/cni-4406a953-0acc-a245-bb67-247320dfc060" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.692 [INFO][4381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" iface="eth0" netns="/var/run/netns/cni-4406a953-0acc-a245-bb67-247320dfc060" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.692 [INFO][4381] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.692 [INFO][4381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.829 [INFO][4437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.829 [INFO][4437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.829 [INFO][4437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.848 [WARNING][4437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.848 [INFO][4437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.852 [INFO][4437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:53:59.866428 containerd[1483]: 2024-12-13 08:53:59.859 [INFO][4381] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:53:59.867251 containerd[1483]: time="2024-12-13T08:53:59.867209560Z" level=info msg="TearDown network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\" successfully" Dec 13 08:53:59.867378 containerd[1483]: time="2024-12-13T08:53:59.867362313Z" level=info msg="StopPodSandbox for \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\" returns successfully" Dec 13 08:53:59.867849 kubelet[2608]: E1213 08:53:59.867819 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:53:59.873355 containerd[1483]: time="2024-12-13T08:53:59.873304894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fq2kp,Uid:d5b39035-4771-4bba-abc4-b862e2d1a098,Namespace:kube-system,Attempt:1,}" Dec 13 08:53:59.875490 systemd[1]: run-netns-cni\x2d4406a953\x2d0acc\x2da245\x2dbb67\x2d247320dfc060.mount: Deactivated successfully. 
Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.613 [INFO][4376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.613 [INFO][4376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" iface="eth0" netns="/var/run/netns/cni-0d315445-4fdf-dde0-518e-b29535e0b601" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.614 [INFO][4376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" iface="eth0" netns="/var/run/netns/cni-0d315445-4fdf-dde0-518e-b29535e0b601" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.617 [INFO][4376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" iface="eth0" netns="/var/run/netns/cni-0d315445-4fdf-dde0-518e-b29535e0b601" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.617 [INFO][4376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.617 [INFO][4376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.845 [INFO][4432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.848 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.851 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.877 [WARNING][4432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.878 [INFO][4432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.884 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:53:59.890350 containerd[1483]: 2024-12-13 08:53:59.887 [INFO][4376] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:53:59.890977 containerd[1483]: time="2024-12-13T08:53:59.890919190Z" level=info msg="TearDown network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\" successfully" Dec 13 08:53:59.890977 containerd[1483]: time="2024-12-13T08:53:59.890948896Z" level=info msg="StopPodSandbox for \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\" returns successfully" Dec 13 08:53:59.892231 containerd[1483]: time="2024-12-13T08:53:59.892056060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd225,Uid:9c2374d2-93f5-41dd-beaa-5f3be640a74e,Namespace:calico-system,Attempt:1,}" Dec 13 08:53:59.985613 containerd[1483]: time="2024-12-13T08:53:59.985491809Z" level=info msg="CreateContainer within sandbox \"a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b974fefc6c5bebb9cba7b752c36deb13e7fdbd8c89a6db80f391ae17597ae4bd\"" Dec 13 08:53:59.986664 containerd[1483]: time="2024-12-13T08:53:59.986626278Z" level=info msg="StartContainer for \"b974fefc6c5bebb9cba7b752c36deb13e7fdbd8c89a6db80f391ae17597ae4bd\"" Dec 13 08:54:00.083671 systemd[1]: Started cri-containerd-b974fefc6c5bebb9cba7b752c36deb13e7fdbd8c89a6db80f391ae17597ae4bd.scope - libcontainer container b974fefc6c5bebb9cba7b752c36deb13e7fdbd8c89a6db80f391ae17597ae4bd. Dec 13 08:54:00.103797 containerd[1483]: time="2024-12-13T08:54:00.103628597Z" level=info msg="CreateContainer within sandbox \"a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e864eddd5042ed671340ac943b7128c0c165cb480e977754c60f129a03263ef\"" Dec 13 08:54:00.106529 containerd[1483]: time="2024-12-13T08:54:00.106472351Z" level=info msg="StartContainer for \"4e864eddd5042ed671340ac943b7128c0c165cb480e977754c60f129a03263ef\"" Dec 13 08:54:00.244591 systemd[1]: Started cri-containerd-4e864eddd5042ed671340ac943b7128c0c165cb480e977754c60f129a03263ef.scope - libcontainer container 4e864eddd5042ed671340ac943b7128c0c165cb480e977754c60f129a03263ef. Dec 13 08:54:00.431763 containerd[1483]: time="2024-12-13T08:54:00.431700893Z" level=info msg="StartContainer for \"4e864eddd5042ed671340ac943b7128c0c165cb480e977754c60f129a03263ef\" returns successfully" Dec 13 08:54:00.470166 systemd[1]: run-netns-cni\x2d0d315445\x2d4fdf\x2ddde0\x2d518e\x2db29535e0b601.mount: Deactivated successfully. 
Dec 13 08:54:00.574580 containerd[1483]: time="2024-12-13T08:54:00.574376990Z" level=info msg="StartContainer for \"b974fefc6c5bebb9cba7b752c36deb13e7fdbd8c89a6db80f391ae17597ae4bd\" returns successfully" Dec 13 08:54:00.659936 systemd-networkd[1363]: calif904b6b038a: Link UP Dec 13 08:54:00.664755 systemd-networkd[1363]: calif904b6b038a: Gained carrier Dec 13 08:54:00.679568 systemd-networkd[1363]: calie0aaa043874: Gained IPv6LL Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.366 [INFO][4490] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0 coredns-7db6d8ff4d- kube-system d5b39035-4771-4bba-abc4-b862e2d1a098 824 0 2024-12-13 08:53:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-4-b1553ec4eb coredns-7db6d8ff4d-fq2kp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif904b6b038a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fq2kp" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.367 [INFO][4490] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fq2kp" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.542 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" HandleID="k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.567 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" HandleID="k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ca0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-4-b1553ec4eb", "pod":"coredns-7db6d8ff4d-fq2kp", "timestamp":"2024-12-13 08:54:00.539168204 +0000 UTC"}, Hostname:"ci-4081.2.1-4-b1553ec4eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.567 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.567 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.567 [INFO][4537] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-4-b1553ec4eb' Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.571 [INFO][4537] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.584 [INFO][4537] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.592 [INFO][4537] ipam/ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.595 [INFO][4537] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.601 [INFO][4537] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.601 [INFO][4537] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.604 [INFO][4537] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214 Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.613 [INFO][4537] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.643 [INFO][4537] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.5/26] block=192.168.124.0/26 handle="k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.644 [INFO][4537] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.5/26] handle="k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.644 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:54:00.727785 containerd[1483]: 2024-12-13 08:54:00.644 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.5/26] IPv6=[] ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" HandleID="k8s-pod-network.8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:00.737835 kubelet[2608]: I1213 08:54:00.726904 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c5f75d696-2kzjr" podStartSLOduration=26.646623606 podStartE2EDuration="29.716151532s" podCreationTimestamp="2024-12-13 08:53:31 +0000 UTC" firstStartedPulling="2024-12-13 08:53:55.302415939 +0000 UTC m=+48.096606636" lastFinishedPulling="2024-12-13 08:53:58.371943752 +0000 UTC m=+51.166134562" observedRunningTime="2024-12-13 08:53:59.795543235 +0000 UTC m=+52.589733951" watchObservedRunningTime="2024-12-13 08:54:00.716151532 +0000 UTC m=+53.510342247" Dec 13 08:54:00.738040 containerd[1483]: 2024-12-13 08:54:00.648 [INFO][4490] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fq2kp" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d5b39035-4771-4bba-abc4-b862e2d1a098", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"", Pod:"coredns-7db6d8ff4d-fq2kp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif904b6b038a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:00.738040 containerd[1483]: 2024-12-13 08:54:00.649 [INFO][4490] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.5/32] ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fq2kp" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:00.738040 containerd[1483]: 2024-12-13 
08:54:00.649 [INFO][4490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif904b6b038a ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fq2kp" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:00.738040 containerd[1483]: 2024-12-13 08:54:00.655 [INFO][4490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fq2kp" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:00.738040 containerd[1483]: 2024-12-13 08:54:00.655 [INFO][4490] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fq2kp" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d5b39035-4771-4bba-abc4-b862e2d1a098", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214", Pod:"coredns-7db6d8ff4d-fq2kp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif904b6b038a", MAC:"66:5a:2f:2c:07:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:00.738040 containerd[1483]: 2024-12-13 08:54:00.709 [INFO][4490] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fq2kp" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:00.780868 kubelet[2608]: I1213 08:54:00.780456 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:54:00.787653 kubelet[2608]: E1213 08:54:00.787590 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:00.845730 kubelet[2608]: I1213 08:54:00.845248 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c5f75d696-w7vmr" podStartSLOduration=29.845094613 podStartE2EDuration="29.845094613s" podCreationTimestamp="2024-12-13 08:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:54:00.842954705 +0000 UTC m=+53.637145425" watchObservedRunningTime="2024-12-13 08:54:00.845094613 +0000 UTC m=+53.639285362" Dec 13 08:54:00.850604 containerd[1483]: time="2024-12-13T08:54:00.849368752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:54:00.850604 containerd[1483]: time="2024-12-13T08:54:00.849465831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:54:00.850604 containerd[1483]: time="2024-12-13T08:54:00.849509449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:00.850604 containerd[1483]: time="2024-12-13T08:54:00.849706983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:00.919540 systemd[1]: Started cri-containerd-8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214.scope - libcontainer container 8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214. Dec 13 08:54:00.990139 systemd-networkd[1363]: calib5b78d8435b: Link UP Dec 13 08:54:00.996419 systemd-networkd[1363]: calib5b78d8435b: Gained carrier Dec 13 08:54:01.029211 kubelet[2608]: I1213 08:54:01.029103 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jjl49" podStartSLOduration=39.029069007 podStartE2EDuration="39.029069007s" podCreationTimestamp="2024-12-13 08:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:54:01.025797001 +0000 UTC m=+53.819987719" watchObservedRunningTime="2024-12-13 08:54:01.029069007 +0000 UTC m=+53.823259730" Dec 13 08:54:01.087289 containerd[1483]: time="2024-12-13T08:54:01.087238689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fq2kp,Uid:d5b39035-4771-4bba-abc4-b862e2d1a098,Namespace:kube-system,Attempt:1,} returns sandbox id \"8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214\"" Dec 13 08:54:01.088679 kubelet[2608]: E1213 08:54:01.088640 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:01.092661 containerd[1483]: time="2024-12-13T08:54:01.092608793Z" level=info msg="CreateContainer within sandbox \"8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.424 [INFO][4514] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0 csi-node-driver- calico-system 
9c2374d2-93f5-41dd-beaa-5f3be640a74e 823 0 2024-12-13 08:53:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-4-b1553ec4eb csi-node-driver-jd225 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib5b78d8435b [] []}} ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Namespace="calico-system" Pod="csi-node-driver-jd225" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.425 [INFO][4514] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Namespace="calico-system" Pod="csi-node-driver-jd225" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.627 [INFO][4542] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" HandleID="k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.667 [INFO][4542] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" HandleID="k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b6ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-4-b1553ec4eb", "pod":"csi-node-driver-jd225", "timestamp":"2024-12-13 08:54:00.627405559 +0000 UTC"}, Hostname:"ci-4081.2.1-4-b1553ec4eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.667 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.667 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.667 [INFO][4542] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-4-b1553ec4eb' Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.674 [INFO][4542] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.698 [INFO][4542] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.736 [INFO][4542] ipam/ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.746 [INFO][4542] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.772 [INFO][4542] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.772 [INFO][4542] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.782 [INFO][4542] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.850 [INFO][4542] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.967 [INFO][4542] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.6/26] block=192.168.124.0/26 handle="k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.967 [INFO][4542] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.6/26] handle="k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" host="ci-4081.2.1-4-b1553ec4eb" Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.968 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:54:01.116268 containerd[1483]: 2024-12-13 08:54:00.968 [INFO][4542] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.6/26] IPv6=[] ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" HandleID="k8s-pod-network.3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:01.118931 containerd[1483]: 2024-12-13 08:54:00.971 [INFO][4514] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Namespace="calico-system" Pod="csi-node-driver-jd225" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c2374d2-93f5-41dd-beaa-5f3be640a74e", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"", Pod:"csi-node-driver-jd225", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5b78d8435b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:01.118931 containerd[1483]: 2024-12-13 08:54:00.972 [INFO][4514] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.6/32] ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Namespace="calico-system" Pod="csi-node-driver-jd225" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:01.118931 containerd[1483]: 2024-12-13 08:54:00.972 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5b78d8435b ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Namespace="calico-system" Pod="csi-node-driver-jd225" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:01.118931 containerd[1483]: 2024-12-13 08:54:01.006 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Namespace="calico-system" Pod="csi-node-driver-jd225" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:01.118931 containerd[1483]: 2024-12-13 08:54:01.024 [INFO][4514] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Namespace="calico-system" Pod="csi-node-driver-jd225" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c2374d2-93f5-41dd-beaa-5f3be640a74e", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff", Pod:"csi-node-driver-jd225", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5b78d8435b", MAC:"da:77:4b:9c:f8:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:01.118931 containerd[1483]: 2024-12-13 08:54:01.110 [INFO][4514] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff" Namespace="calico-system" Pod="csi-node-driver-jd225" WorkloadEndpoint="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:01.127486 systemd-networkd[1363]: calidd01b34e7e3: Gained IPv6LL Dec 13 08:54:01.224257 containerd[1483]: time="2024-12-13T08:54:01.222389347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:54:01.224257 containerd[1483]: time="2024-12-13T08:54:01.222473156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:54:01.224257 containerd[1483]: time="2024-12-13T08:54:01.222499338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:01.224257 containerd[1483]: time="2024-12-13T08:54:01.222630457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:01.286252 containerd[1483]: time="2024-12-13T08:54:01.283031336Z" level=info msg="CreateContainer within sandbox \"8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbceca266e962f7eaefe173d7cf58a1dd46e0670fdd70e6a5d5711dec271b6cf\"" Dec 13 08:54:01.286252 containerd[1483]: time="2024-12-13T08:54:01.286149946Z" level=info msg="StartContainer for \"cbceca266e962f7eaefe173d7cf58a1dd46e0670fdd70e6a5d5711dec271b6cf\"" Dec 13 08:54:01.333322 systemd[1]: Started cri-containerd-3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff.scope - libcontainer container 3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff. Dec 13 08:54:01.472047 systemd[1]: Started cri-containerd-cbceca266e962f7eaefe173d7cf58a1dd46e0670fdd70e6a5d5711dec271b6cf.scope - libcontainer container cbceca266e962f7eaefe173d7cf58a1dd46e0670fdd70e6a5d5711dec271b6cf. Dec 13 08:54:01.613352 containerd[1483]: time="2024-12-13T08:54:01.612865636Z" level=info msg="StartContainer for \"cbceca266e962f7eaefe173d7cf58a1dd46e0670fdd70e6a5d5711dec271b6cf\" returns successfully" Dec 13 08:54:01.836607 kubelet[2608]: I1213 08:54:01.835756 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:54:01.849165 kubelet[2608]: E1213 08:54:01.845015 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:01.849165 kubelet[2608]: I1213 08:54:01.845628 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:54:01.849165 kubelet[2608]: E1213 08:54:01.846660 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:01.983574 kubelet[2608]: I1213 08:54:01.983421 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fq2kp" podStartSLOduration=39.983325644 podStartE2EDuration="39.983325644s" podCreationTimestamp="2024-12-13 08:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:54:01.897740577 +0000 UTC m=+54.691931293" watchObservedRunningTime="2024-12-13 08:54:01.983325644 +0000 UTC m=+54.777516354" Dec 13 08:54:02.140094 containerd[1483]: time="2024-12-13T08:54:02.140010034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd225,Uid:9c2374d2-93f5-41dd-beaa-5f3be640a74e,Namespace:calico-system,Attempt:1,} returns sandbox id \"3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff\"" Dec 13 08:54:02.343695 systemd-networkd[1363]: calif904b6b038a: Gained IPv6LL Dec 13 08:54:02.728214 systemd-networkd[1363]: calib5b78d8435b: Gained IPv6LL Dec 13 08:54:02.873382 kubelet[2608]: E1213 08:54:02.871827 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:02.877898 kubelet[2608]: E1213 08:54:02.877845 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 
08:54:03.230519 containerd[1483]: time="2024-12-13T08:54:03.230446698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:03.235813 containerd[1483]: time="2024-12-13T08:54:03.235630665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 08:54:03.241434 containerd[1483]: time="2024-12-13T08:54:03.240603196Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:03.256207 containerd[1483]: time="2024-12-13T08:54:03.256110622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.881848709s" Dec 13 08:54:03.258340 containerd[1483]: time="2024-12-13T08:54:03.258253981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 08:54:03.262397 containerd[1483]: time="2024-12-13T08:54:03.262099573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 08:54:03.304007 containerd[1483]: time="2024-12-13T08:54:03.303638304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:03.314030 containerd[1483]: time="2024-12-13T08:54:03.313065107Z" level=info msg="CreateContainer within sandbox \"60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 08:54:03.374935 containerd[1483]: time="2024-12-13T08:54:03.374313273Z" level=info msg="CreateContainer within sandbox \"60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1\"" Dec 13 08:54:03.378170 containerd[1483]: time="2024-12-13T08:54:03.376443295Z" level=info msg="StartContainer for \"4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1\"" Dec 13 08:54:03.461508 systemd[1]: Started cri-containerd-4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1.scope - libcontainer container 4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1. 
Dec 13 08:54:03.713956 containerd[1483]: time="2024-12-13T08:54:03.713898654Z" level=info msg="StartContainer for \"4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1\" returns successfully" Dec 13 08:54:03.865077 kubelet[2608]: E1213 08:54:03.865032 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:03.865880 kubelet[2608]: E1213 08:54:03.865850 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:04.109544 kubelet[2608]: I1213 08:54:04.109306 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57b6c9448b-wct54" podStartSLOduration=25.840519138 podStartE2EDuration="32.10926409s" podCreationTimestamp="2024-12-13 08:53:32 +0000 UTC" firstStartedPulling="2024-12-13 08:53:56.992312473 +0000 UTC m=+49.786503169" lastFinishedPulling="2024-12-13 08:54:03.261057405 +0000 UTC m=+56.055248121" observedRunningTime="2024-12-13 08:54:03.896235 +0000 UTC m=+56.690425758" watchObservedRunningTime="2024-12-13 08:54:04.10926409 +0000 UTC m=+56.903454806" Dec 13 08:54:04.868019 kubelet[2608]: E1213 08:54:04.866551 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:05.123727 containerd[1483]: time="2024-12-13T08:54:05.123570257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:05.125934 containerd[1483]: time="2024-12-13T08:54:05.125814971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 08:54:05.129852 containerd[1483]: time="2024-12-13T08:54:05.129765706Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:05.135236 containerd[1483]: time="2024-12-13T08:54:05.135132090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:05.136785 containerd[1483]: time="2024-12-13T08:54:05.136633168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.871939595s" Dec 13 08:54:05.136785 containerd[1483]: time="2024-12-13T08:54:05.136684808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 08:54:05.140969 containerd[1483]: time="2024-12-13T08:54:05.140927197Z" level=info msg="CreateContainer within sandbox \"3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 08:54:05.187793 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1255582216.mount: Deactivated successfully. Dec 13 08:54:05.190728 containerd[1483]: time="2024-12-13T08:54:05.190659361Z" level=info msg="CreateContainer within sandbox \"3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7f578a6f4c68f0ccf09b639c9179abc73e989c083def216ee4a6c319a27d392e\"" Dec 13 08:54:05.192591 containerd[1483]: time="2024-12-13T08:54:05.192554020Z" level=info msg="StartContainer for \"7f578a6f4c68f0ccf09b639c9179abc73e989c083def216ee4a6c319a27d392e\"" Dec 13 08:54:05.268451 systemd[1]: Started cri-containerd-7f578a6f4c68f0ccf09b639c9179abc73e989c083def216ee4a6c319a27d392e.scope - libcontainer container 7f578a6f4c68f0ccf09b639c9179abc73e989c083def216ee4a6c319a27d392e. Dec 13 08:54:05.379458 containerd[1483]: time="2024-12-13T08:54:05.378023441Z" level=info msg="StartContainer for \"7f578a6f4c68f0ccf09b639c9179abc73e989c083def216ee4a6c319a27d392e\" returns successfully" Dec 13 08:54:05.385551 containerd[1483]: time="2024-12-13T08:54:05.384742797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 08:54:07.440570 containerd[1483]: time="2024-12-13T08:54:07.440437600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:07.447326 containerd[1483]: time="2024-12-13T08:54:07.447250693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 08:54:07.453498 containerd[1483]: time="2024-12-13T08:54:07.453416248Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:07.469086 containerd[1483]: time="2024-12-13T08:54:07.469027743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:07.476746 containerd[1483]: time="2024-12-13T08:54:07.476668748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.09149929s" Dec 13 08:54:07.478753 containerd[1483]: time="2024-12-13T08:54:07.478548027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 08:54:07.592661 containerd[1483]: time="2024-12-13T08:54:07.592587826Z" level=info msg="CreateContainer within sandbox \"3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 08:54:07.599596 containerd[1483]: time="2024-12-13T08:54:07.599258294Z" level=info msg="StopPodSandbox for \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\"" Dec 13 08:54:07.643641 containerd[1483]: time="2024-12-13T08:54:07.643577596Z" level=info msg="CreateContainer within sandbox 
\"3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2babba147fe3906e409bc24e39f61c5b37a0b08d426b6940ea03adcce76faf4e\"" Dec 13 08:54:07.645256 containerd[1483]: time="2024-12-13T08:54:07.645000723Z" level=info msg="StartContainer for \"2babba147fe3906e409bc24e39f61c5b37a0b08d426b6940ea03adcce76faf4e\"" Dec 13 08:54:07.718287 systemd[1]: run-containerd-runc-k8s.io-2babba147fe3906e409bc24e39f61c5b37a0b08d426b6940ea03adcce76faf4e-runc.Eka4sk.mount: Deactivated successfully. Dec 13 08:54:07.727400 systemd[1]: Started cri-containerd-2babba147fe3906e409bc24e39f61c5b37a0b08d426b6940ea03adcce76faf4e.scope - libcontainer container 2babba147fe3906e409bc24e39f61c5b37a0b08d426b6940ea03adcce76faf4e. Dec 13 08:54:07.787186 containerd[1483]: time="2024-12-13T08:54:07.787047664Z" level=info msg="StartContainer for \"2babba147fe3906e409bc24e39f61c5b37a0b08d426b6940ea03adcce76faf4e\" returns successfully" Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.826 [WARNING][4856] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0", GenerateName:"calico-apiserver-5c5f75d696-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddf3153c-b769-4b7e-ba57-f0bc3c3374a4", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f75d696", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817", Pod:"calico-apiserver-5c5f75d696-2kzjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e85804ae9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.830 [INFO][4856] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.830 [INFO][4856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" iface="eth0" netns="" Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.830 [INFO][4856] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.830 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.910 [INFO][4900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.910 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.911 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.925 [WARNING][4900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.926 [INFO][4900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.930 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:07.938508 containerd[1483]: 2024-12-13 08:54:07.935 [INFO][4856] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:54:07.939736 containerd[1483]: time="2024-12-13T08:54:07.939619948Z" level=info msg="TearDown network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\" successfully" Dec 13 08:54:07.939736 containerd[1483]: time="2024-12-13T08:54:07.939660543Z" level=info msg="StopPodSandbox for \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\" returns successfully" Dec 13 08:54:07.948740 containerd[1483]: time="2024-12-13T08:54:07.948681750Z" level=info msg="RemovePodSandbox for \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\"" Dec 13 08:54:07.948740 containerd[1483]: time="2024-12-13T08:54:07.948744105Z" level=info msg="Forcibly stopping sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\"" Dec 13 08:54:07.957685 kubelet[2608]: I1213 08:54:07.956893 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jd225" podStartSLOduration=31.53974555 podStartE2EDuration="36.95674131s" podCreationTimestamp="2024-12-13 08:53:31 +0000 UTC" firstStartedPulling="2024-12-13 08:54:02.146911487 +0000 UTC m=+54.941102194" lastFinishedPulling="2024-12-13 08:54:07.563907243 +0000 UTC m=+60.358097954" observedRunningTime="2024-12-13 08:54:07.950025349 +0000 UTC m=+60.744216066" watchObservedRunningTime="2024-12-13 08:54:07.95674131 +0000 UTC m=+60.750932016" Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.054 [WARNING][4921] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0", GenerateName:"calico-apiserver-5c5f75d696-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddf3153c-b769-4b7e-ba57-f0bc3c3374a4", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f75d696", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"2b678feefc730f87accf5bf16bd57033ca5bc2617de95496efa3ff0265316817", Pod:"calico-apiserver-5c5f75d696-2kzjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e85804ae9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.056 [INFO][4921] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 
08:54:08.056 [INFO][4921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" iface="eth0" netns="" Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.056 [INFO][4921] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.056 [INFO][4921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.115 [INFO][4927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.115 [INFO][4927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.115 [INFO][4927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.131 [WARNING][4927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.131 [INFO][4927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" HandleID="k8s-pod-network.2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--2kzjr-eth0" Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.134 [INFO][4927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:08.140550 containerd[1483]: 2024-12-13 08:54:08.138 [INFO][4921] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908" Dec 13 08:54:08.141354 containerd[1483]: time="2024-12-13T08:54:08.140530456Z" level=info msg="TearDown network for sandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\" successfully" Dec 13 08:54:08.177259 containerd[1483]: time="2024-12-13T08:54:08.177174348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 08:54:08.177452 containerd[1483]: time="2024-12-13T08:54:08.177314805Z" level=info msg="RemovePodSandbox \"2aff4406cc488b46d47506cc8c40fcf91b4e6a48a242e0eada84d6cf9628d908\" returns successfully" Dec 13 08:54:08.180628 containerd[1483]: time="2024-12-13T08:54:08.180547321Z" level=info msg="StopPodSandbox for \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\"" Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.295 [WARNING][4945] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d5b39035-4771-4bba-abc4-b862e2d1a098", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214", Pod:"coredns-7db6d8ff4d-fq2kp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif904b6b038a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.295 [INFO][4945] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.295 [INFO][4945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" iface="eth0" netns="" Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.295 [INFO][4945] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.295 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.347 [INFO][4951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.348 [INFO][4951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.350 [INFO][4951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.368 [WARNING][4951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.368 [INFO][4951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.371 [INFO][4951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:08.377715 containerd[1483]: 2024-12-13 08:54:08.374 [INFO][4945] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:54:08.379658 containerd[1483]: time="2024-12-13T08:54:08.377786482Z" level=info msg="TearDown network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\" successfully" Dec 13 08:54:08.379658 containerd[1483]: time="2024-12-13T08:54:08.377822911Z" level=info msg="StopPodSandbox for \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\" returns successfully" Dec 13 08:54:08.380863 containerd[1483]: time="2024-12-13T08:54:08.380823437Z" level=info msg="RemovePodSandbox for \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\"" Dec 13 08:54:08.380982 containerd[1483]: time="2024-12-13T08:54:08.380872585Z" level=info msg="Forcibly stopping sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\"" Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.457 [WARNING][4969] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d5b39035-4771-4bba-abc4-b862e2d1a098", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"8c8b5e2f35273895780bc5852da66287267d7cd83756a63a1e4bafe736a16214", Pod:"coredns-7db6d8ff4d-fq2kp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif904b6b038a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.458 [INFO][4969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.458 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" iface="eth0" netns="" Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.458 [INFO][4969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.458 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.498 [INFO][4975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.499 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.499 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.510 [WARNING][4975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.510 [INFO][4975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" HandleID="k8s-pod-network.e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--fq2kp-eth0" Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.512 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:08.517778 containerd[1483]: 2024-12-13 08:54:08.514 [INFO][4969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf" Dec 13 08:54:08.520766 containerd[1483]: time="2024-12-13T08:54:08.517864224Z" level=info msg="TearDown network for sandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\" successfully" Dec 13 08:54:08.531495 containerd[1483]: time="2024-12-13T08:54:08.531318342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:54:08.531495 containerd[1483]: time="2024-12-13T08:54:08.531426803Z" level=info msg="RemovePodSandbox \"e5306418cea0adddfb382ad3a08729db93d26aebae1c8b84a589414da162addf\" returns successfully" Dec 13 08:54:08.533253 containerd[1483]: time="2024-12-13T08:54:08.532597381Z" level=info msg="StopPodSandbox for \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\"" Dec 13 08:54:08.620620 kubelet[2608]: I1213 08:54:08.620562 2608 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 08:54:08.620944 kubelet[2608]: I1213 08:54:08.620928 2608 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.603 [WARNING][4993] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0", GenerateName:"calico-apiserver-5c5f75d696-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ba1434c-4e4e-46fd-97f9-ebbb427b8559", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f75d696", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0", Pod:"calico-apiserver-5c5f75d696-w7vmr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd01b34e7e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.604 [INFO][4993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.604 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" iface="eth0" netns="" Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.604 [INFO][4993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.604 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.645 [INFO][5000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.645 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.645 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.662 [WARNING][5000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.662 [INFO][5000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.668 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:08.681356 containerd[1483]: 2024-12-13 08:54:08.675 [INFO][4993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:54:08.681356 containerd[1483]: time="2024-12-13T08:54:08.681069268Z" level=info msg="TearDown network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\" successfully" Dec 13 08:54:08.681356 containerd[1483]: time="2024-12-13T08:54:08.681097430Z" level=info msg="StopPodSandbox for \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\" returns successfully" Dec 13 08:54:08.683026 containerd[1483]: time="2024-12-13T08:54:08.682969212Z" level=info msg="RemovePodSandbox for \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\"" Dec 13 08:54:08.683074 containerd[1483]: time="2024-12-13T08:54:08.683046852Z" level=info msg="Forcibly stopping sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\"" Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.796 [WARNING][5018] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0", GenerateName:"calico-apiserver-5c5f75d696-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ba1434c-4e4e-46fd-97f9-ebbb427b8559", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f75d696", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"a8154459c6c30914ec17e3bc27750965572a2fb8e83b232fc8d62caaad66a8b0", Pod:"calico-apiserver-5c5f75d696-w7vmr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd01b34e7e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.797 [INFO][5018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.797 [INFO][5018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" iface="eth0" netns="" Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.797 [INFO][5018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.797 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.835 [INFO][5025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.835 [INFO][5025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.835 [INFO][5025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.849 [WARNING][5025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.849 [INFO][5025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" HandleID="k8s-pod-network.78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--apiserver--5c5f75d696--w7vmr-eth0" Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.854 [INFO][5025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:08.859938 containerd[1483]: 2024-12-13 08:54:08.857 [INFO][5018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686" Dec 13 08:54:08.859938 containerd[1483]: time="2024-12-13T08:54:08.859539485Z" level=info msg="TearDown network for sandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\" successfully" Dec 13 08:54:08.867185 containerd[1483]: time="2024-12-13T08:54:08.866843609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:54:08.867185 containerd[1483]: time="2024-12-13T08:54:08.866946521Z" level=info msg="RemovePodSandbox \"78cb6c96573eb180d6c043237feef4c7adb280e484418384c57bc32cd68ec686\" returns successfully" Dec 13 08:54:08.869104 containerd[1483]: time="2024-12-13T08:54:08.868739980Z" level=info msg="StopPodSandbox for \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\"" Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:08.957 [WARNING][5044] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"32f1faf5-14a7-4e77-ad30-f5c2a4239f44", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5", Pod:"coredns-7db6d8ff4d-jjl49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0aaa043874", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:08.958 [INFO][5044] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:08.958 [INFO][5044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" iface="eth0" netns="" Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:08.958 [INFO][5044] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:08.958 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:08.999 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:09.000 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:09.000 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:09.009 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:09.009 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:09.016 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:09.023979 containerd[1483]: 2024-12-13 08:54:09.019 [INFO][5044] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:54:09.025629 containerd[1483]: time="2024-12-13T08:54:09.024517315Z" level=info msg="TearDown network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\" successfully" Dec 13 08:54:09.025629 containerd[1483]: time="2024-12-13T08:54:09.024559985Z" level=info msg="StopPodSandbox for \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\" returns successfully" Dec 13 08:54:09.026660 containerd[1483]: time="2024-12-13T08:54:09.026004643Z" level=info msg="RemovePodSandbox for \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\"" Dec 13 08:54:09.026660 containerd[1483]: time="2024-12-13T08:54:09.026050898Z" level=info msg="Forcibly stopping sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\"" Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.093 [WARNING][5069] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"32f1faf5-14a7-4e77-ad30-f5c2a4239f44", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"a01defefa92eb50a8fe5799e2193c1755a84b7bb046b70a7e4cc15fa8ddf31d5", Pod:"coredns-7db6d8ff4d-jjl49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0aaa043874", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.094 [INFO][5069] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.094 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" iface="eth0" netns="" Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.094 [INFO][5069] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.094 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.128 [INFO][5075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.128 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.128 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.136 [WARNING][5075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.136 [INFO][5075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" HandleID="k8s-pod-network.bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-coredns--7db6d8ff4d--jjl49-eth0" Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.139 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:09.148496 containerd[1483]: 2024-12-13 08:54:09.144 [INFO][5069] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad" Dec 13 08:54:09.150474 containerd[1483]: time="2024-12-13T08:54:09.148507687Z" level=info msg="TearDown network for sandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\" successfully" Dec 13 08:54:09.156794 containerd[1483]: time="2024-12-13T08:54:09.156622643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:54:09.156794 containerd[1483]: time="2024-12-13T08:54:09.156746001Z" level=info msg="RemovePodSandbox \"bdc5c326c198be84dc8157066841b00cc97215a6990384f726d76c81f085d0ad\" returns successfully" Dec 13 08:54:09.157747 containerd[1483]: time="2024-12-13T08:54:09.157710218Z" level=info msg="StopPodSandbox for \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\"" Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.223 [WARNING][5094] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0", GenerateName:"calico-kube-controllers-57b6c9448b-", Namespace:"calico-system", SelfLink:"", UID:"5f39249e-c6ea-419b-8152-f432f5354acc", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b6c9448b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8", Pod:"calico-kube-controllers-57b6c9448b-wct54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7994ea9a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.224 [INFO][5094] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.224 [INFO][5094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" iface="eth0" netns="" Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.224 [INFO][5094] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.224 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.269 [INFO][5100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.269 [INFO][5100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.269 [INFO][5100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.281 [WARNING][5100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.282 [INFO][5100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.288 [INFO][5100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:09.294764 containerd[1483]: 2024-12-13 08:54:09.291 [INFO][5094] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:54:09.296999 containerd[1483]: time="2024-12-13T08:54:09.294864519Z" level=info msg="TearDown network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\" successfully" Dec 13 08:54:09.296999 containerd[1483]: time="2024-12-13T08:54:09.294924800Z" level=info msg="StopPodSandbox for \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\" returns successfully" Dec 13 08:54:09.296999 containerd[1483]: time="2024-12-13T08:54:09.296893068Z" level=info msg="RemovePodSandbox for \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\"" Dec 13 08:54:09.296999 containerd[1483]: time="2024-12-13T08:54:09.296946420Z" level=info msg="Forcibly stopping sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\"" Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.376 [WARNING][5118] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0", GenerateName:"calico-kube-controllers-57b6c9448b-", Namespace:"calico-system", SelfLink:"", UID:"5f39249e-c6ea-419b-8152-f432f5354acc", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b6c9448b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"60ca077f7c0888b82333bfdeb9afdc41800b071196993feecc58e74e419c7ca8", Pod:"calico-kube-controllers-57b6c9448b-wct54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7994ea9a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.377 [INFO][5118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.377 [INFO][5118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" iface="eth0" netns="" Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.377 [INFO][5118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.377 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.429 [INFO][5124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.429 [INFO][5124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.429 [INFO][5124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.439 [WARNING][5124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.439 [INFO][5124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" HandleID="k8s-pod-network.fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-calico--kube--controllers--57b6c9448b--wct54-eth0" Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.446 [INFO][5124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:09.452679 containerd[1483]: 2024-12-13 08:54:09.449 [INFO][5118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8" Dec 13 08:54:09.453581 containerd[1483]: time="2024-12-13T08:54:09.452751968Z" level=info msg="TearDown network for sandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\" successfully" Dec 13 08:54:09.460488 containerd[1483]: time="2024-12-13T08:54:09.460351475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:54:09.460731 containerd[1483]: time="2024-12-13T08:54:09.460518587Z" level=info msg="RemovePodSandbox \"fddcdc4e4228fb516182855e6acda500bf9dfa5babdf534e12f5f62252f97ea8\" returns successfully" Dec 13 08:54:09.461529 containerd[1483]: time="2024-12-13T08:54:09.461484399Z" level=info msg="StopPodSandbox for \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\"" Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.542 [WARNING][5143] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c2374d2-93f5-41dd-beaa-5f3be640a74e", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff", Pod:"csi-node-driver-jd225", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5b78d8435b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.543 [INFO][5143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.543 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" iface="eth0" netns="" Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.543 [INFO][5143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.543 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.576 [INFO][5150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.576 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.576 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.585 [WARNING][5150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.585 [INFO][5150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.589 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:09.597029 containerd[1483]: 2024-12-13 08:54:09.593 [INFO][5143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:54:09.597029 containerd[1483]: time="2024-12-13T08:54:09.596453858Z" level=info msg="TearDown network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\" successfully" Dec 13 08:54:09.597029 containerd[1483]: time="2024-12-13T08:54:09.596484851Z" level=info msg="StopPodSandbox for \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\" returns successfully" Dec 13 08:54:09.599327 containerd[1483]: time="2024-12-13T08:54:09.597442290Z" level=info msg="RemovePodSandbox for \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\"" Dec 13 08:54:09.599327 containerd[1483]: time="2024-12-13T08:54:09.597481648Z" level=info msg="Forcibly stopping sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\"" Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.664 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c2374d2-93f5-41dd-beaa-5f3be640a74e", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-4-b1553ec4eb", ContainerID:"3839b819b91bedf8b7cda95e55967fd353a2f5b1a90ebf46274e970a73dc55ff", Pod:"csi-node-driver-jd225", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5b78d8435b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.665 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.665 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" iface="eth0" netns="" Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.665 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.665 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.699 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.700 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.700 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.710 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.710 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" HandleID="k8s-pod-network.8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Workload="ci--4081.2.1--4--b1553ec4eb-k8s-csi--node--driver--jd225-eth0" Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.715 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:09.722221 containerd[1483]: 2024-12-13 08:54:09.718 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076" Dec 13 08:54:09.724348 containerd[1483]: time="2024-12-13T08:54:09.724292110Z" level=info msg="TearDown network for sandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\" successfully" Dec 13 08:54:09.731922 containerd[1483]: time="2024-12-13T08:54:09.731688825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:54:09.732109 containerd[1483]: time="2024-12-13T08:54:09.731951694Z" level=info msg="RemovePodSandbox \"8b6fef1aab49a17690d10d5f9b7ddf2e6712c5bfcc25e1474967f18112be0076\" returns successfully" Dec 13 08:54:21.355889 kubelet[2608]: E1213 08:54:21.355605 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:26.356478 kubelet[2608]: E1213 08:54:26.356353 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:27.356622 kubelet[2608]: E1213 08:54:27.356471 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:30.601174 systemd[1]: run-containerd-runc-k8s.io-4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1-runc.iiZcRy.mount: Deactivated successfully. 
Dec 13 08:54:41.044018 kubelet[2608]: I1213 08:54:41.043011 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:54:51.357088 kubelet[2608]: E1213 08:54:51.355938 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:54:59.356660 kubelet[2608]: E1213 08:54:59.356542 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:55:06.356315 kubelet[2608]: E1213 08:55:06.356223 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:55:12.356146 kubelet[2608]: E1213 08:55:12.355912 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:55:12.877990 systemd[1]: run-containerd-runc-k8s.io-4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1-runc.mRRXzL.mount: Deactivated successfully. Dec 13 08:55:17.357109 kubelet[2608]: E1213 08:55:17.356429 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:55:32.514902 systemd[1]: Started sshd@13-144.126.221.125:22-218.92.0.157:39376.service - OpenSSH per-connection server daemon (218.92.0.157:39376). Dec 13 08:55:33.788788 sshd[5402]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 08:55:35.145613 sshd[5400]: PAM: Permission denied for root from 218.92.0.157 Dec 13 08:55:35.477275 sshd[5403]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 08:55:36.355966 kubelet[2608]: E1213 08:55:36.355859 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:55:37.109271 sshd[5400]: PAM: Permission denied for root from 218.92.0.157 Dec 13 08:55:37.440455 sshd[5404]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 08:55:39.680234 sshd[5400]: PAM: Permission denied for root from 218.92.0.157 Dec 13 08:55:39.845658 sshd[5400]: Received disconnect from 218.92.0.157 port 39376:11: [preauth] Dec 13 08:55:39.845658 sshd[5400]: Disconnected from authenticating user root 218.92.0.157 port 39376 [preauth] Dec 13 08:55:39.848331 systemd[1]: sshd@13-144.126.221.125:22-218.92.0.157:39376.service: Deactivated successfully. Dec 13 08:55:51.790606 systemd[1]: sshd@12-144.126.221.125:22-218.92.0.157:43031.service: Deactivated successfully. Dec 13 08:55:52.356395 kubelet[2608]: E1213 08:55:52.356135 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:55:53.803684 systemd[1]: run-containerd-runc-k8s.io-8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4-runc.aJY2cs.mount: Deactivated successfully. 
Dec 13 08:55:55.356041 kubelet[2608]: E1213 08:55:55.355466 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:55:56.356682 kubelet[2608]: E1213 08:55:56.356638 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:56:08.356556 kubelet[2608]: E1213 08:56:08.356508 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:56:09.357253 kubelet[2608]: E1213 08:56:09.356240 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:56:23.789643 systemd[1]: run-containerd-runc-k8s.io-8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4-runc.h0iyyw.mount: Deactivated successfully. Dec 13 08:56:28.355925 kubelet[2608]: E1213 08:56:28.355804 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:56:31.356222 kubelet[2608]: E1213 08:56:31.355603 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:56:42.837980 systemd[1]: run-containerd-runc-k8s.io-4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1-runc.b4da3H.mount: Deactivated successfully. Dec 13 08:57:04.357687 kubelet[2608]: E1213 08:57:04.355986 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:57:04.942794 systemd[1]: Started sshd@14-144.126.221.125:22-218.92.0.219:55244.service - OpenSSH per-connection server daemon (218.92.0.219:55244). Dec 13 08:57:06.147703 systemd[1]: Started sshd@15-144.126.221.125:22-218.92.0.157:36546.service - OpenSSH per-connection server daemon (218.92.0.157:36546). 
Dec 13 08:57:06.311531 sshd[5586]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root Dec 13 08:57:07.284961 sshd[5594]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 08:57:08.355702 kubelet[2608]: E1213 08:57:08.355661 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:57:08.436021 sshd[5583]: PAM: Permission denied for root from 218.92.0.219 Dec 13 08:57:08.793697 sshd[5597]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root Dec 13 08:57:09.212502 sshd[5587]: PAM: Permission denied for root from 218.92.0.157 Dec 13 08:57:11.196525 sshd[5583]: PAM: Permission denied for root from 218.92.0.219 Dec 13 08:57:11.552466 sshd[5598]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root Dec 13 08:57:12.288134 sshd[5587]: Received disconnect from 218.92.0.157 port 36546:11: [preauth] Dec 13 08:57:12.288134 sshd[5587]: Disconnected from authenticating user root 218.92.0.157 port 36546 [preauth] Dec 13 08:57:12.289927 systemd[1]: sshd@15-144.126.221.125:22-218.92.0.157:36546.service: Deactivated successfully. Dec 13 08:57:12.356388 kubelet[2608]: E1213 08:57:12.356134 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:57:12.870168 systemd[1]: run-containerd-runc-k8s.io-4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1-runc.707tww.mount: Deactivated successfully. Dec 13 08:57:13.695723 sshd[5583]: PAM: Permission denied for root from 218.92.0.219 Dec 13 08:57:13.873674 sshd[5583]: Received disconnect from 218.92.0.219 port 55244:11: [preauth] Dec 13 08:57:13.873674 sshd[5583]: Disconnected from authenticating user root 218.92.0.219 port 55244 [preauth] Dec 13 08:57:13.876412 systemd[1]: sshd@14-144.126.221.125:22-218.92.0.219:55244.service: Deactivated successfully. Dec 13 08:57:14.059606 systemd[1]: Started sshd@16-144.126.221.125:22-218.92.0.219:55250.service - OpenSSH per-connection server daemon (218.92.0.219:55250). 
Dec 13 08:57:15.330683 sshd[5624]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root
Dec 13 08:57:17.022740 sshd[5622]: PAM: Permission denied for root from 218.92.0.219
Dec 13 08:57:17.366637 kubelet[2608]: E1213 08:57:17.366297 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:57:17.368782 sshd[5625]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root
Dec 13 08:57:19.001379 sshd[5622]: PAM: Permission denied for root from 218.92.0.219
Dec 13 08:57:19.347709 sshd[5626]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root
Dec 13 08:57:21.590553 sshd[5622]: PAM: Permission denied for root from 218.92.0.219
Dec 13 08:57:21.761725 sshd[5622]: Received disconnect from 218.92.0.219 port 55250:11: [preauth]
Dec 13 08:57:21.761725 sshd[5622]: Disconnected from authenticating user root 218.92.0.219 port 55250 [preauth]
Dec 13 08:57:21.765863 systemd[1]: sshd@16-144.126.221.125:22-218.92.0.219:55250.service: Deactivated successfully.
Dec 13 08:57:21.961804 systemd[1]: Started sshd@17-144.126.221.125:22-218.92.0.219:61034.service - OpenSSH per-connection server daemon (218.92.0.219:61034).
Dec 13 08:57:23.262753 sshd[5633]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root
Dec 13 08:57:25.055060 sshd[5630]: PAM: Permission denied for root from 218.92.0.219
Dec 13 08:57:25.413474 sshd[5658]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root
Dec 13 08:57:27.358208 kubelet[2608]: E1213 08:57:27.358121 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:57:27.813701 sshd[5630]: PAM: Permission denied for root from 218.92.0.219
Dec 13 08:57:28.165339 sshd[5659]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root
Dec 13 08:57:29.838151 sshd[5630]: PAM: Permission denied for root from 218.92.0.219
Dec 13 08:57:30.016802 sshd[5630]: Received disconnect from 218.92.0.219 port 61034:11: [preauth]
Dec 13 08:57:30.016802 sshd[5630]: Disconnected from authenticating user root 218.92.0.219 port 61034 [preauth]
Dec 13 08:57:30.019331 systemd[1]: sshd@17-144.126.221.125:22-218.92.0.219:61034.service: Deactivated successfully.
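The sshd entries above show repeated root-login attempts from 218.92.0.219 and 218.92.0.157 that fail in PAM and disconnect pre-auth. A hypothetical helper (not part of OpenSSH or any tool referenced in this log) that tallies pam_unix authentication failures per source host from journal output of this shape could look like this in Go:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Count pam_unix authentication failures per rhost= source address.
	counts := map[string]int{}
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.Contains(line, "pam_unix(sshd:auth): authentication failure") {
			continue
		}
		for _, field := range strings.Fields(line) {
			if addr, ok := strings.CutPrefix(field, "rhost="); ok && addr != "" {
				counts[addr]++
			}
		}
	}
	for addr, n := range counts {
		fmt.Printf("%s\t%d failed logins\n", addr, n)
	}
}

Piping the sshd lines above through it would report several failures each for 218.92.0.219 and 218.92.0.157.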
Dec 13 08:57:31.356603 kubelet[2608]: E1213 08:57:31.356154 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:57:34.355627 kubelet[2608]: E1213 08:57:34.355561 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:57:41.356384 kubelet[2608]: E1213 08:57:41.356337 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:57:53.797257 systemd[1]: run-containerd-runc-k8s.io-8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4-runc.lw3hwE.mount: Deactivated successfully.
Dec 13 08:58:24.356427 kubelet[2608]: E1213 08:58:24.356221 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:58:30.355835 kubelet[2608]: E1213 08:58:30.355522 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:58:31.356753 kubelet[2608]: E1213 08:58:31.356334 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:58:35.355952 kubelet[2608]: E1213 08:58:35.355681 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:58:45.356662 kubelet[2608]: E1213 08:58:45.356033 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:58:46.311775 systemd[1]: Started sshd@18-144.126.221.125:22-218.92.0.157:16687.service - OpenSSH per-connection server daemon (218.92.0.157:16687).
Dec 13 08:58:46.357490 kubelet[2608]: E1213 08:58:46.356459 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:58:47.541479 sshd[5834]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
Dec 13 08:58:49.198539 sshd[5832]: PAM: Permission denied for root from 218.92.0.157
Dec 13 08:58:49.527876 sshd[5835]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
Dec 13 08:58:51.124718 sshd[5832]: PAM: Permission denied for root from 218.92.0.157
Dec 13 08:58:51.441542 sshd[5836]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
Dec 13 08:58:53.314650 sshd[5832]: PAM: Permission denied for root from 218.92.0.157
Dec 13 08:58:53.479983 sshd[5832]: Received disconnect from 218.92.0.157 port 16687:11: [preauth]
Dec 13 08:58:53.479983 sshd[5832]: Disconnected from authenticating user root 218.92.0.157 port 16687 [preauth]
Dec 13 08:58:53.482878 systemd[1]: sshd@18-144.126.221.125:22-218.92.0.157:16687.service: Deactivated successfully.
Dec 13 08:58:56.355881 kubelet[2608]: E1213 08:58:56.355754 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:59:00.356411 kubelet[2608]: E1213 08:59:00.356354 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:59:23.841882 systemd[1]: run-containerd-runc-k8s.io-8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4-runc.ITid6Y.mount: Deactivated successfully.
Dec 13 08:59:32.367009 kubelet[2608]: E1213 08:59:32.366919 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:59:49.356817 kubelet[2608]: E1213 08:59:49.356153 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:59:51.356787 kubelet[2608]: E1213 08:59:51.356572 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:59:55.357296 kubelet[2608]: E1213 08:59:55.356343 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:00:00.360217 kubelet[2608]: E1213 09:00:00.359814 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:00:06.357310 kubelet[2608]: E1213 09:00:06.356176 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:00:08.355980 kubelet[2608]: E1213 09:00:08.355806 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:00:21.839775 systemd[1]: Started sshd@19-144.126.221.125:22-218.92.0.157:27013.service - OpenSSH per-connection server daemon (218.92.0.157:27013).
Dec 13 09:00:23.791181 systemd[1]: run-containerd-runc-k8s.io-8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4-runc.sTgSpH.mount: Deactivated successfully.
Dec 13 09:00:24.444615 sshd[6032]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
Dec 13 09:00:25.946310 sshd[6030]: PAM: Permission denied for root from 218.92.0.157
Dec 13 09:00:26.356309 kubelet[2608]: E1213 09:00:26.356155 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:00:33.375000 update_engine[1453]: I20241213 09:00:33.374536 1453 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 09:00:33.375000 update_engine[1453]: I20241213 09:00:33.374639 1453 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 09:00:33.379469 update_engine[1453]: I20241213 09:00:33.379396 1453 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 09:00:33.381058 update_engine[1453]: I20241213 09:00:33.380977 1453 omaha_request_params.cc:62] Current group set to stable
Dec 13 09:00:33.381602 update_engine[1453]: I20241213 09:00:33.381245 1453 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 09:00:33.381602 update_engine[1453]: I20241213 09:00:33.381444 1453 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 09:00:33.381602 update_engine[1453]: I20241213 09:00:33.381487 1453 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 09:00:33.381602 update_engine[1453]: I20241213 09:00:33.381563 1453 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 09:00:33.382837 update_engine[1453]: I20241213 09:00:33.381673 1453 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 09:00:33.382837 update_engine[1453]: I20241213 09:00:33.381687 1453 omaha_request_action.cc:272] Request:
Dec 13 09:00:33.382837 update_engine[1453]:
Dec 13 09:00:33.382837 update_engine[1453]:
Dec 13 09:00:33.382837 update_engine[1453]:
Dec 13 09:00:33.382837 update_engine[1453]:
Dec 13 09:00:33.382837 update_engine[1453]:
Dec 13 09:00:33.382837 update_engine[1453]:
Dec 13 09:00:33.382837 update_engine[1453]:
Dec 13 09:00:33.382837 update_engine[1453]:
Dec 13 09:00:33.382837 update_engine[1453]: I20241213 09:00:33.381701 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:00:33.403626 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 09:00:33.407975 update_engine[1453]: I20241213 09:00:33.407480 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:00:33.407975 update_engine[1453]: I20241213 09:00:33.407855 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:00:33.409949 update_engine[1453]: E20241213 09:00:33.409870 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:00:33.410131 update_engine[1453]: I20241213 09:00:33.410009 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 09:00:43.191724 update_engine[1453]: I20241213 09:00:43.191617 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:00:43.192556 update_engine[1453]: I20241213 09:00:43.191973 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:00:43.192556 update_engine[1453]: I20241213 09:00:43.192404 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:00:43.193334 update_engine[1453]: E20241213 09:00:43.193264 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:00:43.193508 update_engine[1453]: I20241213 09:00:43.193420 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 09:00:53.193539 update_engine[1453]: I20241213 09:00:53.193406 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:00:53.194337 update_engine[1453]: I20241213 09:00:53.193760 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:00:53.194337 update_engine[1453]: I20241213 09:00:53.194118 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:00:53.194992 update_engine[1453]: E20241213 09:00:53.194876 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:00:53.194992 update_engine[1453]: I20241213 09:00:53.194952 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 09:00:53.800382 systemd[1]: run-containerd-runc-k8s.io-8999e9f403528f83607efc2ce985d5637ba4f7605c3b193aba2e1c10eeada0e4-runc.NaWv4r.mount: Deactivated successfully.
Dec 13 09:00:55.356675 kubelet[2608]: E1213 09:00:55.356619 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:00:58.356326 kubelet[2608]: E1213 09:00:58.356186 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:01:03.194650 update_engine[1453]: I20241213 09:01:03.194329 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:01:03.195933 update_engine[1453]: I20241213 09:01:03.194701 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:01:03.195933 update_engine[1453]: I20241213 09:01:03.195081 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:01:03.196465 update_engine[1453]: E20241213 09:01:03.196341 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:01:03.196465 update_engine[1453]: I20241213 09:01:03.196437 1453 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 09:01:03.200078 update_engine[1453]: I20241213 09:01:03.199716 1453 omaha_request_action.cc:617] Omaha request response:
Dec 13 09:01:03.200078 update_engine[1453]: E20241213 09:01:03.199930 1453 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 13 09:01:03.200078 update_engine[1453]: I20241213 09:01:03.199985 1453 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 09:01:03.200078 update_engine[1453]: I20241213 09:01:03.199997 1453 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 09:01:03.200078 update_engine[1453]: I20241213 09:01:03.200006 1453 update_attempter.cc:306] Processing Done.
Dec 13 09:01:03.204489 update_engine[1453]: E20241213 09:01:03.204387 1453 update_attempter.cc:619] Update failed.
Dec 13 09:01:03.204489 update_engine[1453]: I20241213 09:01:03.204471 1453 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 09:01:03.204489 update_engine[1453]: I20241213 09:01:03.204486 1453 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 09:01:03.204489 update_engine[1453]: I20241213 09:01:03.204501 1453 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 09:01:03.204954 update_engine[1453]: I20241213 09:01:03.204617 1453 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 09:01:03.204954 update_engine[1453]: I20241213 09:01:03.204663 1453 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 09:01:03.204954 update_engine[1453]: I20241213 09:01:03.204674 1453 omaha_request_action.cc:272] Request:
Dec 13 09:01:03.204954 update_engine[1453]:
Dec 13 09:01:03.204954 update_engine[1453]:
Dec 13 09:01:03.204954 update_engine[1453]:
Dec 13 09:01:03.204954 update_engine[1453]:
Dec 13 09:01:03.204954 update_engine[1453]:
Dec 13 09:01:03.204954 update_engine[1453]:
Dec 13 09:01:03.204954 update_engine[1453]: I20241213 09:01:03.204687 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:01:03.206746 update_engine[1453]: I20241213 09:01:03.205010 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:01:03.206746 update_engine[1453]: I20241213 09:01:03.206003 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:01:03.206746 update_engine[1453]: E20241213 09:01:03.206614 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:01:03.206746 update_engine[1453]: I20241213 09:01:03.206705 1453 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 09:01:03.206746 update_engine[1453]: I20241213 09:01:03.206720 1453 omaha_request_action.cc:617] Omaha request response:
Dec 13 09:01:03.206746 update_engine[1453]: I20241213 09:01:03.206734 1453 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 09:01:03.206746 update_engine[1453]: I20241213 09:01:03.206744 1453 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 09:01:03.206746 update_engine[1453]: I20241213 09:01:03.206753 1453 update_attempter.cc:306] Processing Done.
Dec 13 09:01:03.207339 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 09:01:03.207903 update_engine[1453]: I20241213 09:01:03.206764 1453 update_attempter.cc:310] Error event sent.
Dec 13 09:01:03.211345 update_engine[1453]: I20241213 09:01:03.210870 1453 update_check_scheduler.cc:74] Next update check in 40m42s
Dec 13 09:01:03.212033 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 13 09:01:03.359072 kubelet[2608]: E1213 09:01:03.359025 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:01:03.364778 systemd[1]: Started sshd@20-144.126.221.125:22-218.92.0.221:52700.service - OpenSSH per-connection server daemon (218.92.0.221:52700).
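The update_engine sequence above is the usual pattern on a machine whose update server is set to the placeholder "disabled": each Omaha request fails DNS resolution, is retried a few times, the attempt is reported as failed, and the next periodic check is scheduled (here in 40m42s). A rough Go sketch of that fetch-and-retry flow follows; it is not update_engine's actual C++ implementation, and the retry count and delay are chosen only for illustration.

package main

import (
	"fmt"
	"time"
)

// fetchOmahaResponse stands in for the libcurl transfer; "disabled" is not a
// resolvable host, so this always fails, matching the log above.
func fetchOmahaResponse(server string) error {
	return fmt.Errorf("unable to get http response code: could not resolve host: %s", server)
}

// checkForUpdate retries the request a few times before giving up, roughly
// mirroring the "No HTTP response, retry N" sequence in the log.
func checkForUpdate(server string, maxRetries int, retryDelay time.Duration) error {
	var err error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		if err = fetchOmahaResponse(server); err == nil {
			return nil
		}
		fmt.Printf("No HTTP response, retry %d\n", attempt)
		time.Sleep(retryDelay)
	}
	return fmt.Errorf("Omaha request network transfer failed: %w", err)
}

func main() {
	// Retry count and delay are illustrative, not update_engine's real values.
	if err := checkForUpdate("disabled", 3, time.Second); err != nil {
		fmt.Println("Update failed:", err)
		fmt.Println("Next update check in 40m42s") // interval taken from the log above
	}
}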
Dec 13 09:01:04.358159 kubelet[2608]: E1213 09:01:04.356856 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:01:04.501717 sshd[6125]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Dec 13 09:01:05.355851 kubelet[2608]: E1213 09:01:05.355642 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:01:06.300022 sshd[6123]: PAM: Permission denied for root from 218.92.0.221
Dec 13 09:01:06.598638 sshd[6126]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Dec 13 09:01:08.336751 sshd[6123]: PAM: Permission denied for root from 218.92.0.221
Dec 13 09:01:08.631379 sshd[6129]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Dec 13 09:01:10.644419 sshd[6123]: PAM: Permission denied for root from 218.92.0.221
Dec 13 09:01:10.791096 sshd[6123]: Received disconnect from 218.92.0.221 port 52700:11: [preauth]
Dec 13 09:01:10.791096 sshd[6123]: Disconnected from authenticating user root 218.92.0.221 port 52700 [preauth]
Dec 13 09:01:10.794693 systemd[1]: sshd@20-144.126.221.125:22-218.92.0.221:52700.service: Deactivated successfully.
Dec 13 09:01:34.355908 kubelet[2608]: E1213 09:01:34.355769 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:01:35.357364 kubelet[2608]: E1213 09:01:35.357256 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:01:42.356179 kubelet[2608]: E1213 09:01:42.356122 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:01:42.841985 systemd[1]: run-containerd-runc-k8s.io-4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1-runc.7axwbi.mount: Deactivated successfully.
Dec 13 09:02:02.357705 kubelet[2608]: E1213 09:02:02.357622 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:02:02.359010 kubelet[2608]: E1213 09:02:02.358850 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:02:04.376696 systemd[1]: Started sshd@21-144.126.221.125:22-218.92.0.157:38246.service - OpenSSH per-connection server daemon (218.92.0.157:38246).
Dec 13 09:02:05.487933 sshd[6256]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
Dec 13 09:02:07.326238 sshd[6254]: PAM: Permission denied for root from 218.92.0.157
Dec 13 09:02:17.356475 kubelet[2608]: E1213 09:02:17.355896 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:02:19.356217 kubelet[2608]: E1213 09:02:19.355939 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:02:21.857259 systemd[1]: sshd@19-144.126.221.125:22-218.92.0.157:27013.service: Deactivated successfully.
Dec 13 09:02:29.357215 kubelet[2608]: E1213 09:02:29.356494 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:02:30.585744 systemd[1]: run-containerd-runc-k8s.io-4071a20caee398909747bbcaa2dd851b1d188a2f3f840b66631527b9339ba9f1-runc.zOJIe4.mount: Deactivated successfully.
Dec 13 09:02:45.357004 kubelet[2608]: E1213 09:02:45.356399 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:02:49.356216 kubelet[2608]: E1213 09:02:49.355882 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:03:00.358046 kubelet[2608]: E1213 09:03:00.356132 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:03:21.356678 kubelet[2608]: E1213 09:03:21.355951 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:03:23.358314 kubelet[2608]: E1213 09:03:23.358225 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:03:27.359527 kubelet[2608]: E1213 09:03:27.359422 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:03:31.356459 kubelet[2608]: E1213 09:03:31.355736 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:03:46.356333 kubelet[2608]: E1213 09:03:46.356259 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"