Dec 13 09:47:52.993871 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 09:47:52.993909 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 09:47:52.993928 kernel: BIOS-provided physical RAM map: Dec 13 09:47:52.993940 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 09:47:52.993949 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 09:47:52.993989 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 09:47:52.994001 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Dec 13 09:47:52.994013 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Dec 13 09:47:52.994023 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 09:47:52.994039 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 09:47:52.994057 kernel: NX (Execute Disable) protection: active Dec 13 09:47:52.994067 kernel: APIC: Static calls initialized Dec 13 09:47:52.994078 kernel: SMBIOS 2.8 present. Dec 13 09:47:52.994089 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Dec 13 09:47:52.994104 kernel: Hypervisor detected: KVM Dec 13 09:47:52.994118 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 09:47:52.994129 kernel: kvm-clock: using sched offset of 3243429294 cycles Dec 13 09:47:52.994139 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 09:47:52.994148 kernel: tsc: Detected 2494.140 MHz processor Dec 13 09:47:52.994156 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 09:47:52.994165 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 09:47:52.994174 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 13 09:47:52.994186 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 09:47:52.994199 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 09:47:52.994217 kernel: ACPI: Early table checksum verification disabled Dec 13 09:47:52.994230 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Dec 13 09:47:52.994239 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:47:52.994249 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:47:52.994262 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:47:52.994274 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 09:47:52.994287 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:47:52.994299 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:47:52.994312 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:47:52.994329 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:47:52.994341 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Dec 13 09:47:52.994354 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Dec 13 09:47:52.994368 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 09:47:52.994382 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Dec 13 09:47:52.994409 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Dec 13 09:47:52.994423 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Dec 13 09:47:52.994469 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Dec 13 09:47:52.994479 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 09:47:52.994487 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 09:47:52.994496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 09:47:52.994505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 09:47:52.994514 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Dec 13 09:47:52.994525 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Dec 13 09:47:52.994546 kernel: Zone ranges: Dec 13 09:47:52.994561 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 09:47:52.994576 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Dec 13 09:47:52.994586 kernel: Normal empty Dec 13 09:47:52.994595 kernel: Movable zone start for each node Dec 13 09:47:52.994603 kernel: Early memory node ranges Dec 13 09:47:52.994612 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 09:47:52.994621 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Dec 13 09:47:52.994630 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Dec 13 09:47:52.994644 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 09:47:52.994662 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 09:47:52.994678 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Dec 13 09:47:52.994692 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 09:47:52.994705 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 09:47:52.994713 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 09:47:52.994722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 09:47:52.994731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 09:47:52.994740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 09:47:52.994754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 09:47:52.994824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 09:47:52.994833 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 09:47:52.994842 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 09:47:52.994851 kernel: TSC deadline timer available Dec 13 09:47:52.994859 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 09:47:52.994868 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 09:47:52.994877 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 09:47:52.994890 kernel: Booting paravirtualized kernel on KVM Dec 13 09:47:52.994899 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 09:47:52.994915 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 09:47:52.994924 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Dec 13 09:47:52.994933 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 09:47:52.994941 kernel: pcpu-alloc: [0] 0 1 Dec 13 09:47:52.994950 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 09:47:52.994960 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 09:47:52.994970 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 09:47:52.994978 kernel: random: crng init done Dec 13 09:47:52.994990 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 09:47:52.994998 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 09:47:52.995011 kernel: Fallback order for Node 0: 0 Dec 13 09:47:52.995025 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Dec 13 09:47:52.995038 kernel: Policy zone: DMA32 Dec 13 09:47:52.995053 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 09:47:52.995066 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Dec 13 09:47:52.995075 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 09:47:52.995088 kernel: Kernel/User page tables isolation: enabled Dec 13 09:47:52.995097 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 09:47:52.995105 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 09:47:52.995115 kernel: Dynamic Preempt: voluntary Dec 13 09:47:52.995128 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 09:47:52.995143 kernel: rcu: RCU event tracing is enabled. Dec 13 09:47:52.995157 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 09:47:52.995170 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 09:47:52.995184 kernel: Rude variant of Tasks RCU enabled. Dec 13 09:47:52.995197 kernel: Tracing variant of Tasks RCU enabled. Dec 13 09:47:52.995211 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 09:47:52.995222 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 09:47:52.995234 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 09:47:52.995254 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 09:47:52.995269 kernel: Console: colour VGA+ 80x25 Dec 13 09:47:52.995279 kernel: printk: console [tty0] enabled Dec 13 09:47:52.995288 kernel: printk: console [ttyS0] enabled Dec 13 09:47:52.995299 kernel: ACPI: Core revision 20230628 Dec 13 09:47:52.995308 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 09:47:52.995322 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 09:47:52.995330 kernel: x2apic enabled Dec 13 09:47:52.995340 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 09:47:52.995349 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 09:47:52.995358 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Dec 13 09:47:52.995366 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Dec 13 09:47:52.995375 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 09:47:52.995385 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 09:47:52.995408 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 09:47:52.995417 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 09:47:52.995427 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 09:47:52.995439 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 09:47:52.995448 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 09:47:52.995457 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 09:47:52.995467 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 09:47:52.995476 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 09:47:52.995497 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 09:47:52.995514 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 09:47:52.995525 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 09:47:52.995534 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 09:47:52.995545 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 09:47:52.995554 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 09:47:52.995564 kernel: Freeing SMP alternatives memory: 32K Dec 13 09:47:52.995574 kernel: pid_max: default: 32768 minimum: 301 Dec 13 09:47:52.995583 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 09:47:52.995596 kernel: landlock: Up and running. Dec 13 09:47:52.995605 kernel: SELinux: Initializing. Dec 13 09:47:52.995620 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 09:47:52.995636 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 09:47:52.995649 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 13 09:47:52.995662 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:47:52.995675 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:47:52.995688 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:47:52.995701 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Dec 13 09:47:52.995719 kernel: signal: max sigframe size: 1776 Dec 13 09:47:52.995732 kernel: rcu: Hierarchical SRCU implementation. Dec 13 09:47:52.995746 kernel: rcu: Max phase no-delay instances is 400. Dec 13 09:47:52.995783 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 09:47:52.995798 kernel: smp: Bringing up secondary CPUs ... Dec 13 09:47:52.995818 kernel: smpboot: x86: Booting SMP configuration: Dec 13 09:47:52.995832 kernel: .... node #0, CPUs: #1 Dec 13 09:47:52.995842 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 09:47:52.995852 kernel: smpboot: Max logical packages: 1 Dec 13 09:47:52.995867 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Dec 13 09:47:52.995876 kernel: devtmpfs: initialized Dec 13 09:47:52.995888 kernel: x86/mm: Memory block size: 128MB Dec 13 09:47:52.995903 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 09:47:52.995918 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 09:47:52.995947 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 09:47:52.995964 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 09:47:52.995977 kernel: audit: initializing netlink subsys (disabled) Dec 13 09:47:52.995987 kernel: audit: type=2000 audit(1734083271.858:1): state=initialized audit_enabled=0 res=1 Dec 13 09:47:52.996002 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 09:47:52.996011 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 09:47:52.996021 kernel: cpuidle: using governor menu Dec 13 09:47:52.996031 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 09:47:52.996040 kernel: dca service started, version 1.12.1 Dec 13 09:47:52.996050 kernel: PCI: Using configuration type 1 for base access Dec 13 09:47:52.996060 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 09:47:52.996069 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 09:47:52.996079 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 09:47:52.996092 kernel: ACPI: Added _OSI(Module Device) Dec 13 09:47:52.996102 kernel: ACPI: Added _OSI(Processor Device) Dec 13 09:47:52.996115 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 09:47:52.996131 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 09:47:52.996141 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 09:47:52.996150 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 09:47:52.996160 kernel: ACPI: Interpreter enabled Dec 13 09:47:52.996169 kernel: ACPI: PM: (supports S0 S5) Dec 13 09:47:52.996185 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 09:47:52.996201 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 09:47:52.996212 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 09:47:52.996228 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 09:47:52.996238 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 09:47:52.996535 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 09:47:52.996656 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 09:47:52.999853 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 09:47:52.999920 kernel: acpiphp: Slot [3] registered Dec 13 09:47:52.999932 kernel: acpiphp: Slot [4] registered Dec 13 09:47:52.999941 kernel: acpiphp: Slot [5] registered Dec 13 09:47:52.999951 kernel: acpiphp: Slot [6] registered Dec 13 09:47:52.999960 kernel: acpiphp: Slot [7] registered Dec 13 09:47:52.999970 kernel: acpiphp: Slot [8] registered Dec 13 09:47:52.999979 kernel: acpiphp: Slot [9] registered Dec 13 09:47:52.999988 kernel: acpiphp: Slot [10] registered Dec 13 09:47:52.999997 kernel: acpiphp: Slot [11] registered Dec 13 09:47:53.000009 kernel: acpiphp: Slot [12] registered Dec 13 09:47:53.000019 kernel: acpiphp: Slot [13] registered Dec 13 09:47:53.000028 kernel: acpiphp: Slot [14] registered Dec 13 09:47:53.000037 kernel: acpiphp: Slot [15] registered Dec 13 09:47:53.000050 kernel: acpiphp: Slot [16] registered Dec 13 09:47:53.000064 kernel: acpiphp: Slot [17] registered Dec 13 09:47:53.000074 kernel: acpiphp: Slot [18] registered Dec 13 09:47:53.000082 kernel: acpiphp: Slot [19] registered Dec 13 09:47:53.000091 kernel: acpiphp: Slot [20] registered Dec 13 09:47:53.000100 kernel: acpiphp: Slot [21] registered Dec 13 09:47:53.000113 kernel: acpiphp: Slot [22] registered Dec 13 09:47:53.000122 kernel: acpiphp: Slot [23] registered Dec 13 09:47:53.000134 kernel: acpiphp: Slot [24] registered Dec 13 09:47:53.000148 kernel: acpiphp: Slot [25] registered Dec 13 09:47:53.000162 kernel: acpiphp: Slot [26] registered Dec 13 09:47:53.000171 kernel: acpiphp: Slot [27] registered Dec 13 09:47:53.000180 kernel: acpiphp: Slot [28] registered Dec 13 09:47:53.000189 kernel: acpiphp: Slot [29] registered Dec 13 09:47:53.000199 kernel: acpiphp: Slot [30] registered Dec 13 09:47:53.000212 kernel: acpiphp: Slot [31] registered Dec 13 09:47:53.000221 kernel: PCI host bridge to bus 0000:00 Dec 13 09:47:53.000483 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 09:47:53.000623 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Dec 13 09:47:53.000721 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 09:47:53.000854 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 09:47:53.000954 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 09:47:53.001053 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 09:47:53.001273 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 09:47:53.001470 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 09:47:53.001617 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 09:47:53.002978 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Dec 13 09:47:53.003161 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 09:47:53.003634 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 09:47:53.004146 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 09:47:53.004328 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 09:47:53.004551 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Dec 13 09:47:53.004664 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Dec 13 09:47:53.004917 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 09:47:53.005043 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 09:47:53.005156 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 09:47:53.005269 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 09:47:53.005401 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 09:47:53.005554 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 09:47:53.005811 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Dec 13 09:47:53.005997 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 09:47:53.006178 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 09:47:53.006340 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 09:47:53.006460 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Dec 13 09:47:53.006577 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Dec 13 09:47:53.006679 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 09:47:53.009019 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 09:47:53.009220 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Dec 13 09:47:53.009354 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Dec 13 09:47:53.009482 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 09:47:53.009684 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Dec 13 09:47:53.010338 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Dec 13 09:47:53.010524 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Dec 13 09:47:53.011107 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 09:47:53.011368 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Dec 13 09:47:53.011533 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 09:47:53.011661 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Dec 13 09:47:53.011797 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 09:47:53.011928 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Dec 13 09:47:53.012075 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Dec 13 09:47:53.012246 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Dec 13 09:47:53.012406 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Dec 13 09:47:53.012613 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 09:47:53.014965 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Dec 13 09:47:53.015168 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Dec 13 09:47:53.015184 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 09:47:53.015195 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 09:47:53.015204 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 09:47:53.015214 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 09:47:53.015224 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 09:47:53.015244 kernel: iommu: Default domain type: Translated Dec 13 09:47:53.015253 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 09:47:53.015264 kernel: PCI: Using ACPI for IRQ routing Dec 13 09:47:53.015273 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 09:47:53.015283 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 09:47:53.015292 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Dec 13 09:47:53.015408 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 09:47:53.015553 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 09:47:53.015688 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 09:47:53.015702 kernel: vgaarb: loaded Dec 13 09:47:53.015713 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 09:47:53.015723 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 09:47:53.015732 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 09:47:53.015742 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 09:47:53.015752 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 09:47:53.015779 kernel: pnp: PnP ACPI init Dec 13 09:47:53.015789 kernel: pnp: PnP ACPI: found 4 devices Dec 13 09:47:53.015805 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 09:47:53.015815 kernel: NET: Registered PF_INET protocol family Dec 13 09:47:53.015825 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 09:47:53.015835 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 09:47:53.015844 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 09:47:53.015854 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 09:47:53.015864 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 09:47:53.015873 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 09:47:53.015883 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 09:47:53.015897 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 09:47:53.015907 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 09:47:53.015916 kernel: NET: Registered PF_XDP protocol family Dec 13 09:47:53.016035 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 09:47:53.016132 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 
09:47:53.016226 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 09:47:53.016372 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 09:47:53.016520 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 09:47:53.016692 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 09:47:53.019075 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 09:47:53.019128 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 09:47:53.019326 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 43139 usecs Dec 13 09:47:53.019344 kernel: PCI: CLS 0 bytes, default 64 Dec 13 09:47:53.019355 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 09:47:53.019365 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Dec 13 09:47:53.019375 kernel: Initialise system trusted keyrings Dec 13 09:47:53.019397 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 09:47:53.019407 kernel: Key type asymmetric registered Dec 13 09:47:53.019417 kernel: Asymmetric key parser 'x509' registered Dec 13 09:47:53.019426 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 09:47:53.019436 kernel: io scheduler mq-deadline registered Dec 13 09:47:53.019446 kernel: io scheduler kyber registered Dec 13 09:47:53.019456 kernel: io scheduler bfq registered Dec 13 09:47:53.019466 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 09:47:53.019476 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 09:47:53.019486 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 09:47:53.019511 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 09:47:53.019520 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 09:47:53.019529 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 09:47:53.019538 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 09:47:53.019548 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 09:47:53.019557 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 09:47:53.019566 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 09:47:53.019702 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 09:47:53.019820 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 09:47:53.019930 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T09:47:52 UTC (1734083272) Dec 13 09:47:53.020020 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 09:47:53.020032 kernel: intel_pstate: CPU model not supported Dec 13 09:47:53.020041 kernel: NET: Registered PF_INET6 protocol family Dec 13 09:47:53.020050 kernel: Segment Routing with IPv6 Dec 13 09:47:53.020060 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 09:47:53.020069 kernel: NET: Registered PF_PACKET protocol family Dec 13 09:47:53.020084 kernel: Key type dns_resolver registered Dec 13 09:47:53.020093 kernel: IPI shorthand broadcast: enabled Dec 13 09:47:53.020103 kernel: sched_clock: Marking stable (1137005686, 95755133)->(1254154775, -21393956) Dec 13 09:47:53.020112 kernel: registered taskstats version 1 Dec 13 09:47:53.020121 kernel: Loading compiled-in X.509 certificates Dec 13 09:47:53.020130 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 09:47:53.020139 kernel: Key type .fscrypt registered 
Dec 13 09:47:53.020148 kernel: Key type fscrypt-provisioning registered Dec 13 09:47:53.020157 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 09:47:53.020170 kernel: ima: Allocated hash algorithm: sha1 Dec 13 09:47:53.020179 kernel: ima: No architecture policies found Dec 13 09:47:53.020189 kernel: clk: Disabling unused clocks Dec 13 09:47:53.020198 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 09:47:53.020207 kernel: Write protecting the kernel read-only data: 36864k Dec 13 09:47:53.020238 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 09:47:53.020251 kernel: Run /init as init process Dec 13 09:47:53.020261 kernel: with arguments: Dec 13 09:47:53.020271 kernel: /init Dec 13 09:47:53.020313 kernel: with environment: Dec 13 09:47:53.020328 kernel: HOME=/ Dec 13 09:47:53.020343 kernel: TERM=linux Dec 13 09:47:53.020357 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 09:47:53.020382 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:47:53.020399 systemd[1]: Detected virtualization kvm. Dec 13 09:47:53.020415 systemd[1]: Detected architecture x86-64. Dec 13 09:47:53.020427 systemd[1]: Running in initrd. Dec 13 09:47:53.020446 systemd[1]: No hostname configured, using default hostname. Dec 13 09:47:53.020461 systemd[1]: Hostname set to . Dec 13 09:47:53.020476 systemd[1]: Initializing machine ID from VM UUID. Dec 13 09:47:53.020490 systemd[1]: Queued start job for default target initrd.target. Dec 13 09:47:53.020505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:47:53.020522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:47:53.020542 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 09:47:53.020591 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:47:53.020615 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 09:47:53.020632 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 09:47:53.020656 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 09:47:53.020677 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 09:47:53.020697 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:47:53.020716 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:47:53.020740 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:47:53.022818 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:47:53.022873 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:47:53.022902 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:47:53.022919 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:47:53.022935 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 13 09:47:53.022956 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 09:47:53.022972 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 09:47:53.022988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:47:53.023005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:47:53.023020 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:47:53.023037 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:47:53.023053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 09:47:53.023070 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:47:53.023088 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 09:47:53.023105 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 09:47:53.023119 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:47:53.023130 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:47:53.023140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:47:53.023202 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 09:47:53.023234 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 09:47:53.023245 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:47:53.023257 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 09:47:53.023272 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 09:47:53.023294 systemd-journald[183]: Journal started Dec 13 09:47:53.023326 systemd-journald[183]: Runtime Journal (/run/log/journal/f30095855d204783a0442f7e5a58b728) is 4.9M, max 39.3M, 34.4M free. Dec 13 09:47:53.009139 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 09:47:53.029862 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:47:53.055069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:47:53.069433 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 09:47:53.079697 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 09:47:53.079732 kernel: Bridge firewalling registered Dec 13 09:47:53.078873 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 09:47:53.080944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:47:53.082175 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:47:53.091048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:47:53.098087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:47:53.101102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:47:53.102826 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:47:53.130275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:47:53.131835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 09:47:53.142135 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 09:47:53.142939 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:47:53.145715 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 09:47:53.170784 dracut-cmdline[219]: dracut-dracut-053 Dec 13 09:47:53.178680 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 09:47:53.187543 systemd-resolved[217]: Positive Trust Anchors: Dec 13 09:47:53.187562 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:47:53.187611 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:47:53.191554 systemd-resolved[217]: Defaulting to hostname 'linux'. Dec 13 09:47:53.193297 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:47:53.194086 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:47:53.313835 kernel: SCSI subsystem initialized Dec 13 09:47:53.325799 kernel: Loading iSCSI transport class v2.0-870. Dec 13 09:47:53.339808 kernel: iscsi: registered transport (tcp) Dec 13 09:47:53.365833 kernel: iscsi: registered transport (qla4xxx) Dec 13 09:47:53.365986 kernel: QLogic iSCSI HBA Driver Dec 13 09:47:53.442141 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 09:47:53.447144 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 09:47:53.494966 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 09:47:53.495073 kernel: device-mapper: uevent: version 1.0.3 Dec 13 09:47:53.495836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 09:47:53.547825 kernel: raid6: avx2x4 gen() 14794 MB/s Dec 13 09:47:53.564825 kernel: raid6: avx2x2 gen() 14088 MB/s Dec 13 09:47:53.582110 kernel: raid6: avx2x1 gen() 10921 MB/s Dec 13 09:47:53.582218 kernel: raid6: using algorithm avx2x4 gen() 14794 MB/s Dec 13 09:47:53.600018 kernel: raid6: .... xor() 7846 MB/s, rmw enabled Dec 13 09:47:53.600130 kernel: raid6: using avx2x2 recovery algorithm Dec 13 09:47:53.625858 kernel: xor: automatically using best checksumming function avx Dec 13 09:47:53.851846 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 09:47:53.868713 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:47:53.880083 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 09:47:53.897251 systemd-udevd[402]: Using default interface naming scheme 'v255'. Dec 13 09:47:53.903182 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:47:53.914004 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 09:47:53.934291 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Dec 13 09:47:53.978901 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:47:53.985097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:47:54.063218 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:47:54.074433 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 09:47:54.109942 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 09:47:54.114342 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:47:54.114984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:47:54.118184 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:47:54.124048 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 09:47:54.158531 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:47:54.170784 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 09:47:54.189460 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 09:47:54.189592 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 13 09:47:54.259131 kernel: AES CTR mode by8 optimization enabled Dec 13 09:47:54.259175 kernel: scsi host0: Virtio SCSI HBA Dec 13 09:47:54.259432 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 09:47:54.259586 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 09:47:54.259602 kernel: GPT:9289727 != 125829119 Dec 13 09:47:54.259629 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 09:47:54.259643 kernel: GPT:9289727 != 125829119 Dec 13 09:47:54.259656 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 09:47:54.259669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:47:54.259681 kernel: ACPI: bus type USB registered Dec 13 09:47:54.259693 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 13 09:47:54.263930 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Dec 13 09:47:54.232819 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:47:54.233012 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:47:54.234150 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:47:54.234690 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:47:54.234938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:47:54.235553 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:47:54.258292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:47:54.275942 kernel: usbcore: registered new interface driver usbfs Dec 13 09:47:54.277805 kernel: libata version 3.00 loaded. 
Dec 13 09:47:54.283923 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 09:47:54.294581 kernel: scsi host1: ata_piix Dec 13 09:47:54.295083 kernel: scsi host2: ata_piix Dec 13 09:47:54.295238 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Dec 13 09:47:54.295254 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Dec 13 09:47:54.312333 kernel: usbcore: registered new interface driver hub Dec 13 09:47:54.312438 kernel: usbcore: registered new device driver usb Dec 13 09:47:54.360359 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:47:54.361659 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (455) Dec 13 09:47:54.372821 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449) Dec 13 09:47:54.382964 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 09:47:54.394715 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 09:47:54.403092 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 09:47:54.404720 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 09:47:54.409528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 09:47:54.420137 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 09:47:54.424149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:47:54.430689 disk-uuid[538]: Primary Header is updated. Dec 13 09:47:54.430689 disk-uuid[538]: Secondary Entries is updated. Dec 13 09:47:54.430689 disk-uuid[538]: Secondary Header is updated. Dec 13 09:47:54.453573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:47:54.469536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:47:54.476808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:47:54.516602 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 13 09:47:54.527801 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 13 09:47:54.528034 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 13 09:47:54.528219 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 13 09:47:54.528922 kernel: hub 1-0:1.0: USB hub found Dec 13 09:47:54.529167 kernel: hub 1-0:1.0: 2 ports detected Dec 13 09:47:55.469823 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:47:55.470831 disk-uuid[539]: The operation has completed successfully. Dec 13 09:47:55.531941 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 09:47:55.532152 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 09:47:55.546185 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 09:47:55.560311 sh[558]: Success Dec 13 09:47:55.578860 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 09:47:55.644010 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 09:47:55.654999 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 09:47:55.659553 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 09:47:55.690822 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 09:47:55.690944 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:47:55.690966 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 09:47:55.690986 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 09:47:55.692045 kernel: BTRFS info (device dm-0): using free space tree Dec 13 09:47:55.701336 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 09:47:55.702603 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 09:47:55.717179 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 09:47:55.722109 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 09:47:55.737467 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:47:55.737574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:47:55.737606 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:47:55.741792 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:47:55.759652 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 09:47:55.760229 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:47:55.769053 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 09:47:55.775236 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 09:47:55.927453 ignition[647]: Ignition 2.19.0 Dec 13 09:47:55.927471 ignition[647]: Stage: fetch-offline Dec 13 09:47:55.927560 ignition[647]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:47:55.927598 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:47:55.928927 ignition[647]: parsed url from cmdline: "" Dec 13 09:47:55.931539 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:47:55.928937 ignition[647]: no config URL provided Dec 13 09:47:55.928951 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:47:55.928982 ignition[647]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:47:55.928993 ignition[647]: failed to fetch config: resource requires networking Dec 13 09:47:55.929338 ignition[647]: Ignition finished successfully Dec 13 09:47:55.953857 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:47:55.959108 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:47:55.996567 systemd-networkd[748]: lo: Link UP Dec 13 09:47:55.996581 systemd-networkd[748]: lo: Gained carrier Dec 13 09:47:55.999354 systemd-networkd[748]: Enumeration completed Dec 13 09:47:55.999779 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 09:47:55.999783 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 13 09:47:56.000049 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 13 09:47:56.000816 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:47:56.000822 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:47:56.002014 systemd-networkd[748]: eth0: Link UP Dec 13 09:47:56.002034 systemd-networkd[748]: eth0: Gained carrier Dec 13 09:47:56.002046 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 09:47:56.002542 systemd[1]: Reached target network.target - Network. Dec 13 09:47:56.005262 systemd-networkd[748]: eth1: Link UP Dec 13 09:47:56.005268 systemd-networkd[748]: eth1: Gained carrier Dec 13 09:47:56.005286 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:47:56.010082 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 09:47:56.020926 systemd-networkd[748]: eth0: DHCPv4 address 159.223.206.54/20, gateway 159.223.192.1 acquired from 169.254.169.253 Dec 13 09:47:56.024926 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.3/20, gateway 10.124.0.1 acquired from 169.254.169.253 Dec 13 09:47:56.036704 ignition[750]: Ignition 2.19.0 Dec 13 09:47:56.036716 ignition[750]: Stage: fetch Dec 13 09:47:56.036984 ignition[750]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:47:56.036999 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:47:56.037196 ignition[750]: parsed url from cmdline: "" Dec 13 09:47:56.037201 ignition[750]: no config URL provided Dec 13 09:47:56.037212 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:47:56.037224 ignition[750]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:47:56.037247 ignition[750]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 13 09:47:56.070652 ignition[750]: GET result: OK Dec 13 09:47:56.071413 ignition[750]: parsing config with SHA512: 341e652340d5c6bc14828094b3608a22ea49d043edda4ab0ea2eccdbf5f372aeb7e65d79de7fc291a23bb19bf6c2cec68274433c50448de3475a9436686bb920 Dec 13 09:47:56.081462 unknown[750]: fetched base config from "system" Dec 13 09:47:56.081477 unknown[750]: fetched base config from "system" Dec 13 09:47:56.081489 unknown[750]: fetched user config from "digitalocean" Dec 13 09:47:56.083420 ignition[750]: fetch: fetch complete Dec 13 09:47:56.083437 ignition[750]: fetch: fetch passed Dec 13 09:47:56.083534 ignition[750]: Ignition finished successfully Dec 13 09:47:56.086588 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 09:47:56.093166 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 09:47:56.118662 ignition[757]: Ignition 2.19.0 Dec 13 09:47:56.118683 ignition[757]: Stage: kargs Dec 13 09:47:56.119294 ignition[757]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:47:56.119323 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:47:56.122278 ignition[757]: kargs: kargs passed Dec 13 09:47:56.122400 ignition[757]: Ignition finished successfully Dec 13 09:47:56.124948 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 09:47:56.131112 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 09:47:56.156537 ignition[764]: Ignition 2.19.0 Dec 13 09:47:56.156550 ignition[764]: Stage: disks Dec 13 09:47:56.156753 ignition[764]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:47:56.156782 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:47:56.159924 ignition[764]: disks: disks passed Dec 13 09:47:56.163245 ignition[764]: Ignition finished successfully Dec 13 09:47:56.164745 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 09:47:56.165903 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 09:47:56.166385 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 09:47:56.167129 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:47:56.167848 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:47:56.168637 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:47:56.174041 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 09:47:56.194845 systemd-fsck[772]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 09:47:56.198302 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 09:47:56.206893 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 09:47:56.315808 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 09:47:56.317350 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 09:47:56.318443 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 09:47:56.334982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:47:56.338976 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 09:47:56.342045 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Dec 13 09:47:56.351801 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (780) Dec 13 09:47:56.353105 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 09:47:56.353741 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 09:47:56.358869 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:47:56.358908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:47:56.358921 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:47:56.353819 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:47:56.363795 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:47:56.367233 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 09:47:56.373976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 09:47:56.384006 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 09:47:56.458803 coreos-metadata[783]: Dec 13 09:47:56.458 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:47:56.473785 coreos-metadata[783]: Dec 13 09:47:56.472 INFO Fetch successful Dec 13 09:47:56.482584 coreos-metadata[782]: Dec 13 09:47:56.481 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:47:56.483826 coreos-metadata[783]: Dec 13 09:47:56.483 INFO wrote hostname ci-4081.2.1-d-c5ae8496ec to /sysroot/etc/hostname Dec 13 09:47:56.486247 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 09:47:56.487880 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 09:47:56.494948 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Dec 13 09:47:56.497114 coreos-metadata[782]: Dec 13 09:47:56.495 INFO Fetch successful Dec 13 09:47:56.505962 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 09:47:56.508066 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Dec 13 09:47:56.508316 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Dec 13 09:47:56.515480 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 09:47:56.648723 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 09:47:56.661018 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 09:47:56.664018 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 09:47:56.675875 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:47:56.687423 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 09:47:56.707771 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 09:47:56.717710 ignition[900]: INFO : Ignition 2.19.0 Dec 13 09:47:56.719523 ignition[900]: INFO : Stage: mount Dec 13 09:47:56.719523 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:47:56.719523 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:47:56.721658 ignition[900]: INFO : mount: mount passed Dec 13 09:47:56.722023 ignition[900]: INFO : Ignition finished successfully Dec 13 09:47:56.723144 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 09:47:56.729013 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 09:47:56.753171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:47:56.762801 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (912) Dec 13 09:47:56.765675 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:47:56.765788 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:47:56.765814 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:47:56.769801 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:47:56.772927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
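[Editor's note, not part of the log] The flatcar-metadata-hostname step above fetches the droplet metadata JSON and writes the hostname into the new root ("wrote hostname ci-4081.2.1-d-c5ae8496ec to /sysroot/etc/hostname"). A small sketch of that behaviour follows; the URL and target path come from the log, and the "hostname" JSON key is an assumption about the DigitalOcean metadata schema.

# Sketch: fetch metadata/v1.json and persist its hostname under /sysroot.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"

def write_hostname(sysroot: str = "/sysroot") -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        meta = json.load(resp)
    hostname = meta["hostname"]  # assumed key name in the metadata document
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname", write_hostname())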
Dec 13 09:47:56.802923 ignition[929]: INFO : Ignition 2.19.0 Dec 13 09:47:56.802923 ignition[929]: INFO : Stage: files Dec 13 09:47:56.803989 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:47:56.803989 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:47:56.805067 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Dec 13 09:47:56.806060 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 09:47:56.806060 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 09:47:56.810121 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 09:47:56.811209 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 09:47:56.812140 unknown[929]: wrote ssh authorized keys file for user: core Dec 13 09:47:56.812942 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 09:47:56.815507 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 09:47:56.816577 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 09:47:56.868525 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 09:47:56.960562 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 09:47:56.960562 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:47:56.962551 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 09:47:57.154032 systemd-networkd[748]: eth1: Gained IPv6LL Dec 13 09:47:57.467621 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 09:47:57.752743 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:47:57.752743 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 09:47:57.754539 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:47:57.754539 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:47:57.754539 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 09:47:57.754539 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 09:47:57.754539 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 09:47:57.759854 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:47:57.759854 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:47:57.759854 ignition[929]: INFO : files: files passed Dec 13 09:47:57.759854 ignition[929]: INFO : Ignition finished successfully Dec 13 09:47:57.756832 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 09:47:57.772433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 09:47:57.776591 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 09:47:57.779226 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 09:47:57.779369 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 09:47:57.807585 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:47:57.807585 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:47:57.812085 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:47:57.815011 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:47:57.815931 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 09:47:57.820128 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 09:47:57.890889 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 09:47:57.891085 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 13 09:47:57.892616 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 09:47:57.893386 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 09:47:57.894529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 09:47:57.900155 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 09:47:57.935152 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:47:57.941151 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 09:47:57.967617 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:47:57.968706 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:47:57.973122 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 09:47:57.979961 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 09:47:57.980294 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:47:57.985533 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 09:47:57.987158 systemd[1]: Stopped target basic.target - Basic System. Dec 13 09:47:57.987812 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 09:47:57.988465 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:47:57.988616 systemd-networkd[748]: eth0: Gained IPv6LL Dec 13 09:47:57.993577 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 09:47:57.995509 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 09:47:57.997904 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:47:58.001519 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 09:47:58.002461 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 09:47:58.003472 systemd[1]: Stopped target swap.target - Swaps. Dec 13 09:47:58.004255 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 09:47:58.004623 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:47:58.015693 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:47:58.016679 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:47:58.017466 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 09:47:58.017647 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:47:58.020860 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 09:47:58.021188 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 09:47:58.024556 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 09:47:58.024843 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:47:58.025867 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 09:47:58.026076 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 09:47:58.028911 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 09:47:58.029190 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 09:47:58.049816 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 09:47:58.050482 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 09:47:58.050796 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:47:58.062394 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 09:47:58.064081 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 09:47:58.064408 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:47:58.066921 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 09:47:58.067172 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:47:58.085347 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 09:47:58.085574 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 09:47:58.096815 ignition[982]: INFO : Ignition 2.19.0 Dec 13 09:47:58.096815 ignition[982]: INFO : Stage: umount Dec 13 09:47:58.100943 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:47:58.100943 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:47:58.104086 ignition[982]: INFO : umount: umount passed Dec 13 09:47:58.104086 ignition[982]: INFO : Ignition finished successfully Dec 13 09:47:58.105093 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 09:47:58.105826 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 09:47:58.107713 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 09:47:58.108014 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 09:47:58.115626 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 09:47:58.116569 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 09:47:58.117919 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 09:47:58.118478 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 09:47:58.119598 systemd[1]: Stopped target network.target - Network. Dec 13 09:47:58.120881 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 09:47:58.121005 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:47:58.121697 systemd[1]: Stopped target paths.target - Path Units. Dec 13 09:47:58.123600 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 09:47:58.123937 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:47:58.124815 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 09:47:58.125806 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 09:47:58.126338 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 09:47:58.126422 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:47:58.128497 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 09:47:58.128625 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 09:47:58.129845 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 09:47:58.129948 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 09:47:58.130708 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 09:47:58.130821 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 09:47:58.131944 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Dec 13 09:47:58.133115 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 09:47:58.136081 systemd-networkd[748]: eth1: DHCPv6 lease lost Dec 13 09:47:58.136603 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 09:47:58.140031 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 09:47:58.140264 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 09:47:58.141307 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 09:47:58.141500 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 09:47:58.141948 systemd-networkd[748]: eth0: DHCPv6 lease lost Dec 13 09:47:58.147442 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 09:47:58.147714 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 09:47:58.151571 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 09:47:58.151671 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:47:58.152952 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 09:47:58.153043 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 09:47:58.167153 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 09:47:58.171442 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 09:47:58.171983 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:47:58.173392 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 09:47:58.173497 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:47:58.174313 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 09:47:58.174404 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 09:47:58.175356 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 09:47:58.175426 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:47:58.176648 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:47:58.194475 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 09:47:58.194744 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:47:58.197651 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 09:47:58.197988 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 09:47:58.201079 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 09:47:58.201177 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 09:47:58.202639 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 09:47:58.202718 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:47:58.203672 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 09:47:58.203792 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:47:58.205281 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 09:47:58.205383 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 09:47:58.206330 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:47:58.206410 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 09:47:58.217268 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 09:47:58.218707 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 09:47:58.218892 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:47:58.220755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:47:58.220906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:47:58.229339 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 09:47:58.229616 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 09:47:58.231331 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 09:47:58.236114 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 09:47:58.256159 systemd[1]: Switching root. Dec 13 09:47:58.297378 systemd-journald[183]: Journal stopped Dec 13 09:47:59.850286 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 09:47:59.850509 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 09:47:59.850545 kernel: SELinux: policy capability open_perms=1 Dec 13 09:47:59.850576 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 09:47:59.850596 kernel: SELinux: policy capability always_check_network=0 Dec 13 09:47:59.850613 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 09:47:59.850631 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 09:47:59.850664 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 09:47:59.850684 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 09:47:59.850705 kernel: audit: type=1403 audit(1734083278.475:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 09:47:59.850729 systemd[1]: Successfully loaded SELinux policy in 41.669ms. Dec 13 09:47:59.850795 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.217ms. Dec 13 09:47:59.850822 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:47:59.850849 systemd[1]: Detected virtualization kvm. Dec 13 09:47:59.850881 systemd[1]: Detected architecture x86-64. Dec 13 09:47:59.850909 systemd[1]: Detected first boot. Dec 13 09:47:59.850930 systemd[1]: Hostname set to . Dec 13 09:47:59.850948 systemd[1]: Initializing machine ID from VM UUID. Dec 13 09:47:59.850969 zram_generator::config[1025]: No configuration found. Dec 13 09:47:59.850992 systemd[1]: Populated /etc with preset unit settings. Dec 13 09:47:59.851012 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 09:47:59.851030 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 09:47:59.851049 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 09:47:59.851070 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 09:47:59.851106 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 09:47:59.851131 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 09:47:59.851151 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
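[Editor's note, not part of the log] "Initializing machine ID from VM UUID" above refers to systemd seeding the machine ID from the hypervisor-provided VM UUID on first boot. The sketch below is an assumption about the mechanism (reading the DMI product UUID from sysfs and normalising it to the 32-hex-character machine-id form); it is illustrative only and not systemd's actual code path.

# Sketch: derive a machine-id-shaped string from the VM's DMI product UUID.
from pathlib import Path

def machine_id_from_vm_uuid() -> str:
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.replace("-", "").lower()  # machine-id is 32 lowercase hex chars

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())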
Dec 13 09:47:59.851170 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 09:47:59.851189 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 09:47:59.851207 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 09:47:59.851228 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 09:47:59.851247 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:47:59.851285 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:47:59.851306 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 09:47:59.851328 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 09:47:59.851351 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 09:47:59.851371 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:47:59.851395 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 09:47:59.851416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:47:59.851437 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 09:47:59.851467 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 09:47:59.851487 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 09:47:59.851557 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 09:47:59.851582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:47:59.851602 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:47:59.851623 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:47:59.851645 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:47:59.851665 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 09:47:59.851697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 09:47:59.851719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:47:59.851740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:47:59.851792 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:47:59.851819 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 09:47:59.851838 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 09:47:59.851859 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 09:47:59.851878 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 09:47:59.851903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:47:59.851937 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 09:47:59.851957 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 09:47:59.851976 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Dec 13 09:47:59.851997 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 09:47:59.852018 systemd[1]: Reached target machines.target - Containers. Dec 13 09:47:59.852038 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 09:47:59.852060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:47:59.852081 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:47:59.852106 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 09:47:59.852125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:47:59.852147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:47:59.852167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:47:59.852254 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 09:47:59.852280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:47:59.852303 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 09:47:59.852323 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 09:47:59.852353 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 09:47:59.852374 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 09:47:59.852393 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 09:47:59.852416 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:47:59.852440 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:47:59.852467 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 09:47:59.852489 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 09:47:59.852545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:47:59.852568 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 09:47:59.852597 systemd[1]: Stopped verity-setup.service. Dec 13 09:47:59.852621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:47:59.852642 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 09:47:59.852738 systemd-journald[1098]: Collecting audit messages is disabled. Dec 13 09:47:59.852830 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 09:47:59.852856 systemd-journald[1098]: Journal started Dec 13 09:47:59.852902 systemd-journald[1098]: Runtime Journal (/run/log/journal/f30095855d204783a0442f7e5a58b728) is 4.9M, max 39.3M, 34.4M free. Dec 13 09:47:59.448568 systemd[1]: Queued start job for default target multi-user.target. Dec 13 09:47:59.479098 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 09:47:59.479658 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 09:47:59.854867 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 13 09:47:59.857671 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 09:47:59.860215 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 09:47:59.862221 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 09:47:59.864173 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 09:47:59.867308 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:47:59.868603 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 09:47:59.869716 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 09:47:59.893844 kernel: loop: module loaded Dec 13 09:47:59.896285 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:47:59.897629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:47:59.908897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:47:59.909180 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:47:59.910263 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:47:59.910502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:47:59.911375 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:47:59.920688 kernel: fuse: init (API version 7.39) Dec 13 09:47:59.923635 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 09:47:59.923944 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 09:47:59.936085 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 09:47:59.946799 kernel: ACPI: bus type drm_connector registered Dec 13 09:47:59.946928 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 09:47:59.957991 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 09:47:59.958629 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 09:47:59.958699 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:47:59.961617 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 09:47:59.972088 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 09:47:59.982238 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 09:47:59.983510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:47:59.987429 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 09:47:59.991013 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 09:47:59.991555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:47:59.999178 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 09:47:59.999809 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:48:00.008290 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 13 09:48:00.011063 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 09:48:00.013728 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:48:00.015163 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:48:00.016842 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 09:48:00.017601 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 09:48:00.019061 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 09:48:00.020287 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 09:48:00.029133 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 09:48:00.052938 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 09:48:00.064828 systemd-journald[1098]: Time spent on flushing to /var/log/journal/f30095855d204783a0442f7e5a58b728 is 128.450ms for 985 entries. Dec 13 09:48:00.064828 systemd-journald[1098]: System Journal (/var/log/journal/f30095855d204783a0442f7e5a58b728) is 8.0M, max 195.6M, 187.6M free. Dec 13 09:48:00.217297 systemd-journald[1098]: Received client request to flush runtime journal. Dec 13 09:48:00.217386 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 09:48:00.217431 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 09:48:00.070989 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 09:48:00.108914 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 09:48:00.113062 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 09:48:00.128186 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 09:48:00.188932 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:48:00.223835 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 09:48:00.895972 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 09:48:00.901345 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 09:48:00.906205 kernel: loop1: detected capacity change from 0 to 8 Dec 13 09:48:00.924493 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 09:48:00.931279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:48:00.954504 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:48:00.960278 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 09:48:00.963167 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 09:48:01.016708 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 09:48:01.030883 kernel: loop3: detected capacity change from 0 to 210664 Dec 13 09:48:01.037505 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Dec 13 09:48:01.038241 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Dec 13 09:48:01.086725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 09:48:01.110115 kernel: loop4: detected capacity change from 0 to 142488 Dec 13 09:48:01.130525 kernel: loop5: detected capacity change from 0 to 8 Dec 13 09:48:01.137822 kernel: loop6: detected capacity change from 0 to 140768 Dec 13 09:48:01.158869 kernel: loop7: detected capacity change from 0 to 210664 Dec 13 09:48:01.181591 (sd-merge)[1170]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Dec 13 09:48:01.182466 (sd-merge)[1170]: Merged extensions into '/usr'. Dec 13 09:48:01.200296 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 09:48:01.200334 systemd[1]: Reloading... Dec 13 09:48:01.410913 zram_generator::config[1196]: No configuration found. Dec 13 09:48:01.746532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:48:01.767797 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 09:48:01.842585 systemd[1]: Reloading finished in 639 ms. Dec 13 09:48:01.886050 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 09:48:01.892365 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 09:48:01.902339 systemd[1]: Starting ensure-sysext.service... Dec 13 09:48:01.913158 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:48:01.950996 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Dec 13 09:48:01.951029 systemd[1]: Reloading... Dec 13 09:48:02.006116 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 09:48:02.006871 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 09:48:02.008643 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 09:48:02.009167 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Dec 13 09:48:02.009302 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Dec 13 09:48:02.019135 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 09:48:02.019157 systemd-tmpfiles[1240]: Skipping /boot Dec 13 09:48:02.075475 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 09:48:02.075495 systemd-tmpfiles[1240]: Skipping /boot Dec 13 09:48:02.158802 zram_generator::config[1279]: No configuration found. Dec 13 09:48:02.336203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:48:02.405851 systemd[1]: Reloading finished in 454 ms. Dec 13 09:48:02.421305 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 09:48:02.430398 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:48:02.445313 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:48:02.460127 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
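[Editor's note, not part of the log] The (sd-merge) messages above come from systemd-sysext discovering the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-digitalocean' extension images and overlaying them onto /usr and /opt. The sketch below only enumerates candidate images; the directory list is an assumption about the usual sysext search paths, and the actual merge is performed by systemd-sysext, not this script.

# Sketch: list extension images that systemd-sysext would consider merging.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images() -> list[str]:
    images = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            images += [str(f) for f in sorted(p.iterdir())
                       if f.suffix == ".raw" or f.is_dir()]
    return images

if __name__ == "__main__":
    for image in list_extension_images():
        print("found extension image:", image)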
Dec 13 09:48:02.465121 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 09:48:02.477131 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 09:48:02.481729 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:48:02.489262 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 09:48:02.496563 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:48:02.499035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:48:02.504354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:48:02.512684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:48:02.523285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:48:02.524685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:48:02.524958 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:48:02.537113 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 09:48:02.542666 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:48:02.544122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:48:02.544472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:48:02.544681 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:48:02.549519 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:48:02.549967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:48:02.563400 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:48:02.565090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:48:02.565376 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:48:02.574485 systemd[1]: Finished ensure-sysext.service. Dec 13 09:48:02.588169 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 09:48:02.591096 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 09:48:02.598296 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:48:02.598574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:48:02.601159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:48:02.601435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 09:48:02.602618 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:48:02.603931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:48:02.605592 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:48:02.606434 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:48:02.619109 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 09:48:02.623309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:48:02.623440 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:48:02.632342 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 09:48:02.640596 augenrules[1347]: No rules Dec 13 09:48:02.645390 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:48:02.671997 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Dec 13 09:48:02.679225 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 09:48:02.682477 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 09:48:02.684354 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:48:02.708353 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 09:48:02.712902 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:48:02.721574 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:48:02.842743 systemd-resolved[1317]: Positive Trust Anchors: Dec 13 09:48:02.842773 systemd-resolved[1317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:48:02.842859 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:48:02.854594 systemd-resolved[1317]: Using system hostname 'ci-4081.2.1-d-c5ae8496ec'. Dec 13 09:48:02.857117 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:48:02.857848 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:48:02.913467 systemd-networkd[1359]: lo: Link UP Dec 13 09:48:02.914030 systemd-networkd[1359]: lo: Gained carrier Dec 13 09:48:02.915516 systemd-networkd[1359]: Enumeration completed Dec 13 09:48:02.915689 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:48:02.916967 systemd[1]: Reached target network.target - Network. Dec 13 09:48:02.926755 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 09:48:02.935416 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Dec 13 09:48:02.937278 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 09:48:02.973825 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1363) Dec 13 09:48:02.998369 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 09:48:03.007802 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1363) Dec 13 09:48:03.039993 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 13 09:48:03.041907 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:48:03.042191 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:48:03.052105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:48:03.057027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:48:03.080112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:48:03.080900 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:48:03.080980 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:48:03.081008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:48:03.101142 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:48:03.101419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:48:03.113577 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1372) Dec 13 09:48:03.113153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:48:03.113451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:48:03.114823 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:48:03.123213 kernel: ISO 9660 Extensions: RRIP_1991A Dec 13 09:48:03.124849 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 13 09:48:03.127283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:48:03.128299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:48:03.128676 systemd-networkd[1359]: eth1: Configuring with /run/systemd/network/10-6e:92:7c:cc:9f:78.network. Dec 13 09:48:03.129613 systemd-networkd[1359]: eth1: Link UP Dec 13 09:48:03.129620 systemd-networkd[1359]: eth1: Gained carrier Dec 13 09:48:03.138997 systemd-timesyncd[1336]: Network configuration changed, trying to establish connection. Dec 13 09:48:03.146169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:48:03.155455 systemd-networkd[1359]: eth0: Configuring with /run/systemd/network/10-92:39:5c:56:15:53.network. 
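[Editor's note, not part of the log] The per-interface unit files referenced above (e.g. /run/systemd/network/10-6e:92:7c:cc:9f:78.network) pin configuration to a NIC by its MAC address. The sketch below writes a minimal unit of that shape; the file name pattern is taken from the log, while the unit's contents (a plain DHCPv4 configuration) are an assumption for illustration.

# Sketch: generate a MAC-matched systemd-networkd unit under /run.
from pathlib import Path

def write_network_unit(mac: str, run_dir: str = "/run/systemd/network") -> Path:
    unit = (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=ipv4\n"
    )
    path = Path(run_dir) / f"10-{mac}.network"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(unit)
    return path

if __name__ == "__main__":
    print("wrote", write_network_unit("6e:92:7c:cc:9f:78"))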
Dec 13 09:48:03.156710 systemd-networkd[1359]: eth0: Link UP Dec 13 09:48:03.156722 systemd-networkd[1359]: eth0: Gained carrier Dec 13 09:48:03.160626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 09:48:03.166895 kernel: ACPI: button: Power Button [PWRF] Dec 13 09:48:03.204200 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 09:48:03.264903 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 09:48:03.264987 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 09:48:03.269856 kernel: Console: switching to colour dummy device 80x25 Dec 13 09:48:03.269968 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 09:48:03.269993 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 09:48:03.270015 kernel: [drm] features: -context_init Dec 13 09:48:03.272812 kernel: [drm] number of scanouts: 1 Dec 13 09:48:03.272948 kernel: [drm] number of cap sets: 0 Dec 13 09:48:03.276556 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 09:48:03.283214 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 09:48:03.283350 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 09:48:03.289804 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 09:48:03.331980 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 09:48:03.390276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:48:03.400619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 09:48:03.477079 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 09:48:03.497200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:48:03.497562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:48:03.513994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:48:03.520920 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 09:48:03.559371 kernel: EDAC MC: Ver: 3.0.0 Dec 13 09:48:03.583524 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 09:48:03.596036 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 09:48:03.614827 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:48:03.644768 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 09:48:03.647412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:48:03.654220 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 09:48:03.655887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:48:03.657548 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:48:03.659754 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 09:48:03.660018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 09:48:03.660596 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Dec 13 09:48:03.661091 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 09:48:03.662076 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 09:48:03.662221 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 09:48:03.662259 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:48:03.662881 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:48:03.666929 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 09:48:03.671134 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 09:48:03.671528 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:48:03.683135 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 09:48:03.686259 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 09:48:03.690471 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:48:03.692495 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:48:03.695106 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:48:03.695151 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:48:03.699968 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 09:48:03.710077 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 09:48:03.727985 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 09:48:03.734275 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 09:48:03.740158 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 09:48:03.744100 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 09:48:03.754097 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 09:48:03.769894 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 09:48:03.779062 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 09:48:03.792024 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
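[Editor's note, not part of the log] prepare-helm.service ("Unpack helm to /opt/bin") presumably extracts the helm binary from the tarball that the Ignition files stage wrote earlier. The sketch below is an assumption about what such a step could do; the member name "linux-amd64/helm" matches the upstream helm tarball layout, but the unit's real commands are not shown in the log.

# Sketch: extract the helm binary from the tarball Ignition wrote to /opt.
import os
import shutil
import tarfile

TARBALL = "/opt/helm-v3.13.2-linux-amd64.tar.gz"
DEST = "/opt/bin/helm"

def unpack_helm() -> None:
    os.makedirs(os.path.dirname(DEST), exist_ok=True)
    with tarfile.open(TARBALL) as tar:
        member = tar.extractfile("linux-amd64/helm")
        with open(DEST, "wb") as out:
            shutil.copyfileobj(member, out)
    os.chmod(DEST, 0o755)

if __name__ == "__main__":
    unpack_helm()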
Dec 13 09:48:03.802141 extend-filesystems[1429]: Found loop4 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found loop5 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found loop6 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found loop7 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found vda Dec 13 09:48:03.802141 extend-filesystems[1429]: Found vda1 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found vda2 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found vda3 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found usr Dec 13 09:48:03.802141 extend-filesystems[1429]: Found vda4 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found vda6 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found vda7 Dec 13 09:48:03.802141 extend-filesystems[1429]: Found vda9 Dec 13 09:48:03.802141 extend-filesystems[1429]: Checking size of /dev/vda9 Dec 13 09:48:03.969039 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 13 09:48:03.919241 dbus-daemon[1425]: [system] SELinux support is enabled Dec 13 09:48:03.807145 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 09:48:03.970807 coreos-metadata[1424]: Dec 13 09:48:03.839 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:48:03.970807 coreos-metadata[1424]: Dec 13 09:48:03.871 INFO Fetch successful Dec 13 09:48:03.986455 extend-filesystems[1429]: Resized partition /dev/vda9 Dec 13 09:48:03.809443 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 09:48:03.993138 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Dec 13 09:48:04.010879 jq[1426]: false Dec 13 09:48:03.810210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 09:48:03.818103 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 09:48:04.028534 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1375) Dec 13 09:48:03.838039 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 09:48:03.860472 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 09:48:03.876713 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 09:48:03.877052 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 09:48:04.029595 jq[1438]: true Dec 13 09:48:03.919989 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 09:48:04.029799 tar[1442]: linux-amd64/helm Dec 13 09:48:03.948654 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 09:48:04.043027 update_engine[1437]: I20241213 09:48:04.038270 1437 main.cc:92] Flatcar Update Engine starting Dec 13 09:48:03.949821 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 09:48:03.988991 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 09:48:03.989038 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
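The coreos-metadata agent above obtains the droplet's configuration from the link-local metadata endpoint before the units that depend on it continue. A minimal Go sketch of the same request, using only the standard library; the URL is the one shown in the log, everything else (timeout, printing the raw JSON instead of parsing it) is an illustrative assumption:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the Flatcar metadata agent queries in the log above.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://169.254.169.254/metadata/v1.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The agent parses this JSON for hostname, network and SSH key data;
	// here we only print the raw document.
	fmt.Printf("status=%d bytes=%d\n%s\n", resp.StatusCode, len(body), body)
}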
Dec 13 09:48:03.991050 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 09:48:03.991147 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 13 09:48:03.991178 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 09:48:04.039566 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 09:48:04.041380 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 09:48:04.058230 systemd[1]: Started update-engine.service - Update Engine. Dec 13 09:48:04.068344 update_engine[1437]: I20241213 09:48:04.063066 1437 update_check_scheduler.cc:74] Next update check in 8m41s Dec 13 09:48:04.067290 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 09:48:04.067457 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 09:48:04.095897 jq[1458]: true Dec 13 09:48:04.180603 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 09:48:04.195106 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 09:48:04.209528 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 09:48:04.236557 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 09:48:04.236557 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 09:48:04.236557 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 09:48:04.245961 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Dec 13 09:48:04.245961 extend-filesystems[1429]: Found vdb Dec 13 09:48:04.242972 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 09:48:04.243656 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 09:48:04.244529 systemd-logind[1435]: New seat seat0. Dec 13 09:48:04.254988 systemd-logind[1435]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 09:48:04.255016 systemd-logind[1435]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 09:48:04.256988 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 09:48:04.354263 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:48:04.348281 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 09:48:04.368517 systemd[1]: Starting sshkeys.service... Dec 13 09:48:04.426012 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 09:48:04.439233 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
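extend-filesystems.service grows the root filesystem online: resize2fs takes the ext4 volume on /dev/vda9 from 553472 to 15121403 4k blocks while it stays mounted at /. A hedged sketch of driving the same tool from Go; the device path comes from the log, and running it requires root plus a mounted ext4 filesystem with free space in its partition:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With no size argument resize2fs grows the filesystem to fill its
	// partition; for ext4 this works online, which is what the
	// extend-filesystems step above relies on.
	out, err := exec.Command("resize2fs", "/dev/vda9").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("resize failed:", err)
	}
}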
Dec 13 09:48:04.540600 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 09:48:04.553829 coreos-metadata[1496]: Dec 13 09:48:04.553 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:48:04.567976 coreos-metadata[1496]: Dec 13 09:48:04.567 INFO Fetch successful Dec 13 09:48:04.595240 unknown[1496]: wrote ssh authorized keys file for user: core Dec 13 09:48:04.654081 update-ssh-keys[1504]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:48:04.656574 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 09:48:04.663891 systemd[1]: Finished sshkeys.service. Dec 13 09:48:04.740458 containerd[1459]: time="2024-12-13T09:48:04.740314901Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 09:48:04.742819 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 09:48:04.770153 systemd-networkd[1359]: eth1: Gained IPv6LL Dec 13 09:48:04.780789 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 09:48:04.784195 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 09:48:04.795194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:48:04.807265 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 09:48:04.811363 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 09:48:04.828180 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 09:48:04.851397 containerd[1459]: time="2024-12-13T09:48:04.851275774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:48:04.857659 containerd[1459]: time="2024-12-13T09:48:04.856927101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:48:04.857659 containerd[1459]: time="2024-12-13T09:48:04.856975069Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 09:48:04.857659 containerd[1459]: time="2024-12-13T09:48:04.856994361Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 09:48:04.857659 containerd[1459]: time="2024-12-13T09:48:04.857209100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 09:48:04.857659 containerd[1459]: time="2024-12-13T09:48:04.857232971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 09:48:04.857659 containerd[1459]: time="2024-12-13T09:48:04.857297125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:48:04.857659 containerd[1459]: time="2024-12-13T09:48:04.857314307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:48:04.859820 containerd[1459]: time="2024-12-13T09:48:04.858179411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:48:04.859820 containerd[1459]: time="2024-12-13T09:48:04.859398486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 09:48:04.859820 containerd[1459]: time="2024-12-13T09:48:04.859450661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:48:04.859820 containerd[1459]: time="2024-12-13T09:48:04.859466207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 09:48:04.859820 containerd[1459]: time="2024-12-13T09:48:04.859602409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:48:04.860325 containerd[1459]: time="2024-12-13T09:48:04.860295688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:48:04.860981 containerd[1459]: time="2024-12-13T09:48:04.860941509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:48:04.861094 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 09:48:04.861512 containerd[1459]: time="2024-12-13T09:48:04.861140838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 09:48:04.861373 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 09:48:04.862440 containerd[1459]: time="2024-12-13T09:48:04.861918656Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 09:48:04.862440 containerd[1459]: time="2024-12-13T09:48:04.862035894Z" level=info msg="metadata content store policy set" policy=shared Dec 13 09:48:04.868240 containerd[1459]: time="2024-12-13T09:48:04.868049021Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 09:48:04.868240 containerd[1459]: time="2024-12-13T09:48:04.868180369Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 09:48:04.868240 containerd[1459]: time="2024-12-13T09:48:04.868203111Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 09:48:04.868739 containerd[1459]: time="2024-12-13T09:48:04.868444405Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 09:48:04.868739 containerd[1459]: time="2024-12-13T09:48:04.868481737Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 09:48:04.868739 containerd[1459]: time="2024-12-13T09:48:04.868669917Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 09:48:04.869592 containerd[1459]: time="2024-12-13T09:48:04.869390870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 09:48:04.869742 containerd[1459]: time="2024-12-13T09:48:04.869699381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 09:48:04.869742 containerd[1459]: time="2024-12-13T09:48:04.869722524Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 09:48:04.869966 containerd[1459]: time="2024-12-13T09:48:04.869826585Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 09:48:04.869966 containerd[1459]: time="2024-12-13T09:48:04.869845816Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 09:48:04.869966 containerd[1459]: time="2024-12-13T09:48:04.869859525Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 09:48:04.869966 containerd[1459]: time="2024-12-13T09:48:04.869873115Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 09:48:04.869966 containerd[1459]: time="2024-12-13T09:48:04.869898379Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 09:48:04.869966 containerd[1459]: time="2024-12-13T09:48:04.869914986Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 09:48:04.869966 containerd[1459]: time="2024-12-13T09:48:04.869938364Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 09:48:04.870421 containerd[1459]: time="2024-12-13T09:48:04.869954199Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 09:48:04.870421 containerd[1459]: time="2024-12-13T09:48:04.870276743Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 09:48:04.870421 containerd[1459]: time="2024-12-13T09:48:04.870343511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.870421 containerd[1459]: time="2024-12-13T09:48:04.870360822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.870421 containerd[1459]: time="2024-12-13T09:48:04.870373611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.870726 containerd[1459]: time="2024-12-13T09:48:04.870509613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.870726 containerd[1459]: time="2024-12-13T09:48:04.870530217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.870726 containerd[1459]: time="2024-12-13T09:48:04.870546184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.870726 containerd[1459]: time="2024-12-13T09:48:04.870558685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.871069 containerd[1459]: time="2024-12-13T09:48:04.870842505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 09:48:04.871069 containerd[1459]: time="2024-12-13T09:48:04.870867903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.871069 containerd[1459]: time="2024-12-13T09:48:04.870910148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.871069 containerd[1459]: time="2024-12-13T09:48:04.870927720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.871069 containerd[1459]: time="2024-12-13T09:48:04.870941134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.871069 containerd[1459]: time="2024-12-13T09:48:04.870955114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.871672 containerd[1459]: time="2024-12-13T09:48:04.870971299Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 09:48:04.871672 containerd[1459]: time="2024-12-13T09:48:04.871382174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.871672 containerd[1459]: time="2024-12-13T09:48:04.871423041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.871672 containerd[1459]: time="2024-12-13T09:48:04.871439468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 09:48:04.872156 containerd[1459]: time="2024-12-13T09:48:04.871541482Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 09:48:04.872156 containerd[1459]: time="2024-12-13T09:48:04.871945767Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 09:48:04.872156 containerd[1459]: time="2024-12-13T09:48:04.871965721Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 09:48:04.872156 containerd[1459]: time="2024-12-13T09:48:04.871998364Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 09:48:04.872156 containerd[1459]: time="2024-12-13T09:48:04.872013007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 09:48:04.872156 containerd[1459]: time="2024-12-13T09:48:04.872031350Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 09:48:04.872564 containerd[1459]: time="2024-12-13T09:48:04.872244436Z" level=info msg="NRI interface is disabled by configuration." Dec 13 09:48:04.872564 containerd[1459]: time="2024-12-13T09:48:04.872264773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 09:48:04.873186 containerd[1459]: time="2024-12-13T09:48:04.872951458Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 09:48:04.873186 containerd[1459]: time="2024-12-13T09:48:04.873031500Z" level=info msg="Connect containerd service" Dec 13 09:48:04.873186 containerd[1459]: time="2024-12-13T09:48:04.873097206Z" level=info msg="using legacy CRI server" Dec 13 09:48:04.873186 containerd[1459]: time="2024-12-13T09:48:04.873106305Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 09:48:04.874090 containerd[1459]: time="2024-12-13T09:48:04.873672557Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 09:48:04.874508 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Dec 13 09:48:04.875782 containerd[1459]: time="2024-12-13T09:48:04.875494818Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:48:04.875782 containerd[1459]: time="2024-12-13T09:48:04.875650239Z" level=info msg="Start subscribing containerd event" Dec 13 09:48:04.875963 containerd[1459]: time="2024-12-13T09:48:04.875749395Z" level=info msg="Start recovering state" Dec 13 09:48:04.876500 containerd[1459]: time="2024-12-13T09:48:04.876328672Z" level=info msg="Start event monitor" Dec 13 09:48:04.876500 containerd[1459]: time="2024-12-13T09:48:04.876366969Z" level=info msg="Start snapshots syncer" Dec 13 09:48:04.876500 containerd[1459]: time="2024-12-13T09:48:04.876384019Z" level=info msg="Start cni network conf syncer for default" Dec 13 09:48:04.876500 containerd[1459]: time="2024-12-13T09:48:04.876394770Z" level=info msg="Start streaming server" Dec 13 09:48:04.877559 containerd[1459]: time="2024-12-13T09:48:04.877532973Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 09:48:04.878252 containerd[1459]: time="2024-12-13T09:48:04.877922154Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 09:48:04.880978 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 09:48:04.883675 containerd[1459]: time="2024-12-13T09:48:04.883425773Z" level=info msg="containerd successfully booted in 0.148684s" Dec 13 09:48:04.910475 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 09:48:04.933508 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 09:48:04.947465 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 09:48:04.963340 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 09:48:04.964316 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 09:48:05.218747 systemd-networkd[1359]: eth0: Gained IPv6LL Dec 13 09:48:05.350159 tar[1442]: linux-amd64/LICENSE Dec 13 09:48:05.350740 tar[1442]: linux-amd64/README.md Dec 13 09:48:05.364436 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 09:48:05.728037 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 09:48:05.736428 systemd[1]: Started sshd@0-159.223.206.54:22-147.75.109.163:46946.service - OpenSSH per-connection server daemon (147.75.109.163:46946). Dec 13 09:48:05.844710 sshd[1543]: Accepted publickey for core from 147.75.109.163 port 46946 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:48:05.847315 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:48:05.863833 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 09:48:05.875483 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 09:48:05.889720 systemd-logind[1435]: New session 1 of user core. Dec 13 09:48:05.915561 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 09:48:05.929383 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 09:48:05.947715 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 09:48:06.086164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
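containerd comes up in roughly 0.15s and reports serving on /run/containerd/containerd.sock (plus its ttrpc twin); the only error is the expected CNI one, since /etc/cni/net.d has no network config until a plugin is installed later. A minimal liveness check against that socket, standard library only; it merely verifies the socket accepts connections and does not speak the containerd API:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket containerd reports it is serving on.
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 2*time.Second)
	if err != nil {
		fmt.Println("containerd socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("containerd socket is accepting connections")
}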
Dec 13 09:48:06.091523 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 09:48:06.098619 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:48:06.153734 systemd[1547]: Queued start job for default target default.target. Dec 13 09:48:06.159638 systemd[1547]: Created slice app.slice - User Application Slice. Dec 13 09:48:06.159696 systemd[1547]: Reached target paths.target - Paths. Dec 13 09:48:06.159720 systemd[1547]: Reached target timers.target - Timers. Dec 13 09:48:06.165162 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 09:48:06.194590 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 09:48:06.195674 systemd[1547]: Reached target sockets.target - Sockets. Dec 13 09:48:06.195703 systemd[1547]: Reached target basic.target - Basic System. Dec 13 09:48:06.195821 systemd[1547]: Reached target default.target - Main User Target. Dec 13 09:48:06.195871 systemd[1547]: Startup finished in 229ms. Dec 13 09:48:06.196350 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 09:48:06.211143 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 09:48:06.214861 systemd[1]: Startup finished in 1.284s (kernel) + 5.763s (initrd) + 7.779s (userspace) = 14.828s. Dec 13 09:48:06.308478 systemd[1]: Started sshd@1-159.223.206.54:22-147.75.109.163:46962.service - OpenSSH per-connection server daemon (147.75.109.163:46962). Dec 13 09:48:06.379692 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 46962 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:48:06.380910 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:48:06.388150 systemd-logind[1435]: New session 2 of user core. Dec 13 09:48:06.398070 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 09:48:06.475091 sshd[1572]: pam_unix(sshd:session): session closed for user core Dec 13 09:48:06.485741 systemd[1]: sshd@1-159.223.206.54:22-147.75.109.163:46962.service: Deactivated successfully. Dec 13 09:48:06.491381 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 09:48:06.496807 systemd-logind[1435]: Session 2 logged out. Waiting for processes to exit. Dec 13 09:48:06.509275 systemd[1]: Started sshd@2-159.223.206.54:22-147.75.109.163:46968.service - OpenSSH per-connection server daemon (147.75.109.163:46968). Dec 13 09:48:06.512376 systemd-logind[1435]: Removed session 2. Dec 13 09:48:06.572890 sshd[1579]: Accepted publickey for core from 147.75.109.163 port 46968 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:48:06.575730 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:48:06.588267 systemd-logind[1435]: New session 3 of user core. Dec 13 09:48:06.598119 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 09:48:06.664626 sshd[1579]: pam_unix(sshd:session): session closed for user core Dec 13 09:48:06.679796 systemd[1]: sshd@2-159.223.206.54:22-147.75.109.163:46968.service: Deactivated successfully. Dec 13 09:48:06.682652 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 09:48:06.686678 systemd-logind[1435]: Session 3 logged out. Waiting for processes to exit. Dec 13 09:48:06.694414 systemd[1]: Started sshd@3-159.223.206.54:22-147.75.109.163:46976.service - OpenSSH per-connection server daemon (147.75.109.163:46976). 
Dec 13 09:48:06.700279 systemd-logind[1435]: Removed session 3. Dec 13 09:48:06.757903 sshd[1586]: Accepted publickey for core from 147.75.109.163 port 46976 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:48:06.759834 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:48:06.768019 systemd-logind[1435]: New session 4 of user core. Dec 13 09:48:06.777058 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 09:48:06.847428 sshd[1586]: pam_unix(sshd:session): session closed for user core Dec 13 09:48:06.857276 systemd[1]: sshd@3-159.223.206.54:22-147.75.109.163:46976.service: Deactivated successfully. Dec 13 09:48:06.860698 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 09:48:06.862864 systemd-logind[1435]: Session 4 logged out. Waiting for processes to exit. Dec 13 09:48:06.872358 systemd[1]: Started sshd@4-159.223.206.54:22-147.75.109.163:46978.service - OpenSSH per-connection server daemon (147.75.109.163:46978). Dec 13 09:48:06.876755 systemd-logind[1435]: Removed session 4. Dec 13 09:48:06.925513 sshd[1595]: Accepted publickey for core from 147.75.109.163 port 46978 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:48:06.926373 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:48:06.938694 systemd-logind[1435]: New session 5 of user core. Dec 13 09:48:06.941127 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 09:48:07.025155 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 09:48:07.025640 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:48:07.042696 sudo[1598]: pam_unix(sudo:session): session closed for user root Dec 13 09:48:07.047028 sshd[1595]: pam_unix(sshd:session): session closed for user core Dec 13 09:48:07.063264 systemd[1]: sshd@4-159.223.206.54:22-147.75.109.163:46978.service: Deactivated successfully. Dec 13 09:48:07.068036 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 09:48:07.071531 systemd-logind[1435]: Session 5 logged out. Waiting for processes to exit. Dec 13 09:48:07.079367 kubelet[1558]: E1213 09:48:07.079276 1558 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:48:07.082257 systemd[1]: Started sshd@5-159.223.206.54:22-147.75.109.163:46990.service - OpenSSH per-connection server daemon (147.75.109.163:46990). Dec 13 09:48:07.083038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:48:07.083271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:48:07.085943 systemd[1]: kubelet.service: Consumed 1.386s CPU time. Dec 13 09:48:07.096985 systemd-logind[1435]: Removed session 5. Dec 13 09:48:07.140892 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 46990 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:48:07.143647 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:48:07.151862 systemd-logind[1435]: New session 6 of user core. Dec 13 09:48:07.163185 systemd[1]: Started session-6.scope - Session 6 of User core. 
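The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written by kubeadm init/join, so failures before the node is joined are expected and systemd simply reschedules the unit. A small sketch of the same pre-flight check; the path is taken from the error message above, the kubeadm remark is an assumption about how this node is provisioned:

package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	// The kubelet refuses to start when the file its --config flag points at is missing.
	if _, err := os.Stat(path); err != nil {
		fmt.Println("kubelet config missing:", err)
		fmt.Println("hint: kubeadm init/join normally writes this file")
		return
	}
	fmt.Println("kubelet config present at", path)
}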
Dec 13 09:48:07.231253 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 09:48:07.232618 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:48:07.238379 sudo[1608]: pam_unix(sudo:session): session closed for user root Dec 13 09:48:07.248245 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 09:48:07.248812 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:48:07.272390 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 09:48:07.287875 auditctl[1611]: No rules Dec 13 09:48:07.289730 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 09:48:07.290196 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 09:48:07.299590 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:48:07.343816 augenrules[1630]: No rules Dec 13 09:48:07.345506 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:48:07.347805 sudo[1607]: pam_unix(sudo:session): session closed for user root Dec 13 09:48:07.355362 sshd[1603]: pam_unix(sshd:session): session closed for user core Dec 13 09:48:07.364691 systemd[1]: sshd@5-159.223.206.54:22-147.75.109.163:46990.service: Deactivated successfully. Dec 13 09:48:07.367104 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 09:48:07.369722 systemd-logind[1435]: Session 6 logged out. Waiting for processes to exit. Dec 13 09:48:07.380918 systemd[1]: Started sshd@6-159.223.206.54:22-147.75.109.163:46998.service - OpenSSH per-connection server daemon (147.75.109.163:46998). Dec 13 09:48:07.383177 systemd-logind[1435]: Removed session 6. Dec 13 09:48:07.423752 sshd[1638]: Accepted publickey for core from 147.75.109.163 port 46998 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:48:07.426391 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:48:07.433912 systemd-logind[1435]: New session 7 of user core. Dec 13 09:48:07.440248 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 09:48:07.504190 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 09:48:07.504613 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:48:08.137360 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 09:48:08.149583 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 09:48:08.672814 dockerd[1657]: time="2024-12-13T09:48:08.672695652Z" level=info msg="Starting up" Dec 13 09:48:08.810943 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1878299388-merged.mount: Deactivated successfully. Dec 13 09:48:08.893443 dockerd[1657]: time="2024-12-13T09:48:08.893343830Z" level=info msg="Loading containers: start." Dec 13 09:48:09.060825 kernel: Initializing XFRM netlink socket Dec 13 09:48:09.192281 systemd-networkd[1359]: docker0: Link UP Dec 13 09:48:09.221813 dockerd[1657]: time="2024-12-13T09:48:09.221698973Z" level=info msg="Loading containers: done." 
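dockerd initialises with the overlay2 storage driver and, as reported just below, listens on /run/docker.sock. A hedged sketch of talking to that API from Go with only the standard library, using the Engine's /_ping health endpoint; the socket path matches the log, the endpoint and the dummy host name in the URL are standard Engine API conventions rather than anything this log shows:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	// Route HTTP over the Docker daemon's unix socket instead of TCP.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr, Timeout: 3 * time.Second}

	// The host in the URL is ignored once DialContext pins the socket;
	// /_ping answers "OK" when the daemon is healthy.
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}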
Dec 13 09:48:09.251176 dockerd[1657]: time="2024-12-13T09:48:09.251065865Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 09:48:09.251395 dockerd[1657]: time="2024-12-13T09:48:09.251275872Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 09:48:09.251506 dockerd[1657]: time="2024-12-13T09:48:09.251477915Z" level=info msg="Daemon has completed initialization" Dec 13 09:48:09.305220 dockerd[1657]: time="2024-12-13T09:48:09.305108774Z" level=info msg="API listen on /run/docker.sock" Dec 13 09:48:09.306268 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 09:48:09.876769 systemd-resolved[1317]: Clock change detected. Flushing caches. Dec 13 09:48:09.877569 systemd-timesyncd[1336]: Contacted time server 51.81.209.232:123 (1.flatcar.pool.ntp.org). Dec 13 09:48:09.877663 systemd-timesyncd[1336]: Initial clock synchronization to Fri 2024-12-13 09:48:09.876210 UTC. Dec 13 09:48:10.294824 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1541544942-merged.mount: Deactivated successfully. Dec 13 09:48:11.082891 containerd[1459]: time="2024-12-13T09:48:11.082788289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 09:48:11.773643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600120879.mount: Deactivated successfully. Dec 13 09:48:13.339030 containerd[1459]: time="2024-12-13T09:48:13.338911537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:13.341761 containerd[1459]: time="2024-12-13T09:48:13.340944506Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 09:48:13.345216 containerd[1459]: time="2024-12-13T09:48:13.345137268Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:13.351307 containerd[1459]: time="2024-12-13T09:48:13.351227551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:13.353963 containerd[1459]: time="2024-12-13T09:48:13.353827575Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.270936169s" Dec 13 09:48:13.353963 containerd[1459]: time="2024-12-13T09:48:13.353962237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 09:48:13.398262 containerd[1459]: time="2024-12-13T09:48:13.398182150Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 09:48:15.670095 containerd[1459]: time="2024-12-13T09:48:15.670014724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:15.671731 containerd[1459]: time="2024-12-13T09:48:15.671652001Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 09:48:15.672447 containerd[1459]: time="2024-12-13T09:48:15.672004430Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:15.679929 containerd[1459]: time="2024-12-13T09:48:15.679792796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:15.682303 containerd[1459]: time="2024-12-13T09:48:15.682073242Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.283824712s" Dec 13 09:48:15.682303 containerd[1459]: time="2024-12-13T09:48:15.682146959Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 09:48:15.716153 containerd[1459]: time="2024-12-13T09:48:15.716092458Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 09:48:16.165376 systemd-resolved[1317]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Dec 13 09:48:17.216618 containerd[1459]: time="2024-12-13T09:48:17.216538962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:17.219723 containerd[1459]: time="2024-12-13T09:48:17.219104601Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 09:48:17.219723 containerd[1459]: time="2024-12-13T09:48:17.219622593Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:17.225405 containerd[1459]: time="2024-12-13T09:48:17.225280360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:17.227932 containerd[1459]: time="2024-12-13T09:48:17.227476281Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.511328794s" Dec 13 09:48:17.227932 containerd[1459]: time="2024-12-13T09:48:17.227563564Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 09:48:17.270304 containerd[1459]: time="2024-12-13T09:48:17.269938758Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 09:48:17.827763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 09:48:17.834499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:48:18.056220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:48:18.066003 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:48:18.171534 kubelet[1899]: E1213 09:48:18.170960 1899 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:48:18.177929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:48:18.178176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:48:18.657108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706266747.mount: Deactivated successfully. Dec 13 09:48:19.216342 systemd-resolved[1317]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Dec 13 09:48:19.314747 containerd[1459]: time="2024-12-13T09:48:19.313262073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:19.314747 containerd[1459]: time="2024-12-13T09:48:19.314564796Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 09:48:19.314747 containerd[1459]: time="2024-12-13T09:48:19.314650653Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:19.317828 containerd[1459]: time="2024-12-13T09:48:19.317751924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:19.318944 containerd[1459]: time="2024-12-13T09:48:19.318890074Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.048891287s" Dec 13 09:48:19.318944 containerd[1459]: time="2024-12-13T09:48:19.318940649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 09:48:19.358003 containerd[1459]: time="2024-12-13T09:48:19.357946328Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 09:48:19.946110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771039884.mount: Deactivated successfully. 
Dec 13 09:48:21.173756 containerd[1459]: time="2024-12-13T09:48:21.171929856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:21.174649 containerd[1459]: time="2024-12-13T09:48:21.174588767Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 09:48:21.175262 containerd[1459]: time="2024-12-13T09:48:21.175208258Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:21.179885 containerd[1459]: time="2024-12-13T09:48:21.179792119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:21.181928 containerd[1459]: time="2024-12-13T09:48:21.181828338Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.823544041s" Dec 13 09:48:21.181928 containerd[1459]: time="2024-12-13T09:48:21.181925865Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 09:48:21.222558 containerd[1459]: time="2024-12-13T09:48:21.222509520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 09:48:21.713713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491306318.mount: Deactivated successfully. 
Dec 13 09:48:21.722883 containerd[1459]: time="2024-12-13T09:48:21.721568813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:21.724033 containerd[1459]: time="2024-12-13T09:48:21.723956741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 09:48:21.726332 containerd[1459]: time="2024-12-13T09:48:21.726242727Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:21.734012 containerd[1459]: time="2024-12-13T09:48:21.733938841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:21.735717 containerd[1459]: time="2024-12-13T09:48:21.735640020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 512.838194ms" Dec 13 09:48:21.736234 containerd[1459]: time="2024-12-13T09:48:21.736007055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 09:48:21.773413 containerd[1459]: time="2024-12-13T09:48:21.773027811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 09:48:22.335685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792871040.mount: Deactivated successfully. Dec 13 09:48:24.420050 containerd[1459]: time="2024-12-13T09:48:24.419969718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:24.421540 containerd[1459]: time="2024-12-13T09:48:24.421467211Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 09:48:24.422807 containerd[1459]: time="2024-12-13T09:48:24.421961469Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:24.426383 containerd[1459]: time="2024-12-13T09:48:24.426325168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:24.429103 containerd[1459]: time="2024-12-13T09:48:24.429024456Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.655915185s" Dec 13 09:48:24.429555 containerd[1459]: time="2024-12-13T09:48:24.429344884Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 09:48:28.428766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
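The pulls above (kube-apiserver, controller-manager, scheduler, proxy, coredns, pause and etcd) go through containerd's CRI plugin. A sketch of an equivalent pull through the containerd Go client, reusing the pause:3.9 reference seen in the log; the "k8s.io" namespace is an assumption (it is the namespace the CRI plugin conventionally uses), and the program needs the github.com/containerd/containerd module plus access to the socket:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images conventionally live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name())
}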
Dec 13 09:48:28.439405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:48:28.631896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:48:28.641956 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:48:28.652771 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:48:28.654053 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 09:48:28.654370 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:48:28.665653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:48:28.716482 systemd[1]: Reloading requested from client PID 2095 ('systemctl') (unit session-7.scope)... Dec 13 09:48:28.716527 systemd[1]: Reloading... Dec 13 09:48:28.905590 zram_generator::config[2137]: No configuration found. Dec 13 09:48:29.096095 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:48:29.228473 systemd[1]: Reloading finished in 510 ms. Dec 13 09:48:29.297995 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 09:48:29.298133 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 09:48:29.298834 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:48:29.305501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:48:29.517040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:48:29.531762 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:48:29.609019 kubelet[2186]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:48:29.609019 kubelet[2186]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 09:48:29.609019 kubelet[2186]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 09:48:29.609560 kubelet[2186]: I1213 09:48:29.609047 2186 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:48:29.986551 kubelet[2186]: I1213 09:48:29.986355 2186 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 09:48:29.986551 kubelet[2186]: I1213 09:48:29.986396 2186 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:48:29.986781 kubelet[2186]: I1213 09:48:29.986642 2186 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 09:48:30.049927 kubelet[2186]: I1213 09:48:30.049391 2186 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:48:30.056495 kubelet[2186]: E1213 09:48:30.055645 2186 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://159.223.206.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.100306 kubelet[2186]: I1213 09:48:30.099367 2186 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 09:48:30.116531 kubelet[2186]: I1213 09:48:30.116412 2186 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:48:30.117072 kubelet[2186]: I1213 09:48:30.116521 2186 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-d-c5ae8496ec","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 09:48:30.117072 kubelet[2186]: I1213 09:48:30.116948 2186 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:48:30.117072 kubelet[2186]: I1213 09:48:30.116968 2186 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 09:48:30.117626 kubelet[2186]: I1213 09:48:30.117207 2186 state_mem.go:36] "Initialized new 
in-memory state store" Dec 13 09:48:30.118488 kubelet[2186]: I1213 09:48:30.118439 2186 kubelet.go:400] "Attempting to sync node with API server" Dec 13 09:48:30.118488 kubelet[2186]: I1213 09:48:30.118479 2186 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 09:48:30.118646 kubelet[2186]: I1213 09:48:30.118518 2186 kubelet.go:312] "Adding apiserver pod source" Dec 13 09:48:30.118646 kubelet[2186]: I1213 09:48:30.118566 2186 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:48:30.133224 kubelet[2186]: W1213 09:48:30.132441 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.206.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.133224 kubelet[2186]: E1213 09:48:30.132578 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://159.223.206.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.133224 kubelet[2186]: W1213 09:48:30.132732 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.206.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-d-c5ae8496ec&limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.133224 kubelet[2186]: E1213 09:48:30.132782 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://159.223.206.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-d-c5ae8496ec&limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.137992 kubelet[2186]: I1213 09:48:30.137338 2186 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:48:30.144268 kubelet[2186]: I1213 09:48:30.142105 2186 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:48:30.144268 kubelet[2186]: W1213 09:48:30.144165 2186 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
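The nodeConfig entry just above carries the kubelet's hard-eviction thresholds inline as JSON. The sketch below unmarshals two of those thresholds verbatim, assuming only the field names visible in the log; the struct is illustrative, not the kubelet's real type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Field names mirror the HardEvictionThresholds entries logged above.
type thresholdValue struct {
	Quantity   *string `json:"Quantity"`   // e.g. "100Mi", or null
	Percentage float64 `json:"Percentage"` // e.g. 0.1 for 10%
}

type evictionThreshold struct {
	Signal      string         `json:"Signal"`
	Operator    string         `json:"Operator"`
	Value       thresholdValue `json:"Value"`
	GracePeriod int64          `json:"GracePeriod"`
}

func main() {
	// Two thresholds copied from the logged node config.
	raw := `[
	 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
	 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}
	]`
	var ts []evictionThreshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		q := "<nil>"
		if t.Value.Quantity != nil {
			q = *t.Value.Quantity
		}
		fmt.Printf("%s %s quantity=%s percentage=%g\n", t.Signal, t.Operator, q, t.Value.Percentage)
	}
}
```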
Dec 13 09:48:30.158711 kubelet[2186]: I1213 09:48:30.155302 2186 server.go:1264] "Started kubelet" Dec 13 09:48:30.177121 kubelet[2186]: I1213 09:48:30.170930 2186 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:48:30.180291 kubelet[2186]: I1213 09:48:30.178709 2186 server.go:455] "Adding debug handlers to kubelet server" Dec 13 09:48:30.183497 kubelet[2186]: I1213 09:48:30.179681 2186 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:48:30.186331 kubelet[2186]: E1213 09:48:30.185191 2186 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.223.206.54:6443/api/v1/namespaces/default/events\": dial tcp 159.223.206.54:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-d-c5ae8496ec.1810b39540bcca9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-d-c5ae8496ec,UID:ci-4081.2.1-d-c5ae8496ec,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-d-c5ae8496ec,},FirstTimestamp:2024-12-13 09:48:30.155254429 +0000 UTC m=+0.615614784,LastTimestamp:2024-12-13 09:48:30.155254429 +0000 UTC m=+0.615614784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-d-c5ae8496ec,}" Dec 13 09:48:30.186331 kubelet[2186]: I1213 09:48:30.185788 2186 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:48:30.187476 kubelet[2186]: I1213 09:48:30.187367 2186 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:48:30.192233 kubelet[2186]: E1213 09:48:30.190710 2186 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-d-c5ae8496ec\" not found" Dec 13 09:48:30.192233 kubelet[2186]: I1213 09:48:30.190823 2186 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 09:48:30.192233 kubelet[2186]: I1213 09:48:30.191916 2186 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 09:48:30.192233 kubelet[2186]: I1213 09:48:30.192034 2186 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:48:30.195586 kubelet[2186]: W1213 09:48:30.192517 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.206.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.195586 kubelet[2186]: E1213 09:48:30.192579 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://159.223.206.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.195586 kubelet[2186]: E1213 09:48:30.193345 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.206.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-d-c5ae8496ec?timeout=10s\": dial tcp 159.223.206.54:6443: connect: connection refused" interval="200ms" Dec 13 09:48:30.203820 kubelet[2186]: I1213 09:48:30.203770 2186 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:48:30.204031 kubelet[2186]: I1213 09:48:30.203963 
2186 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:48:30.207284 kubelet[2186]: E1213 09:48:30.207211 2186 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:48:30.208243 kubelet[2186]: I1213 09:48:30.208200 2186 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:48:30.263998 kubelet[2186]: I1213 09:48:30.263615 2186 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:48:30.265338 kubelet[2186]: I1213 09:48:30.265108 2186 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:48:30.266875 kubelet[2186]: I1213 09:48:30.266297 2186 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:48:30.268282 kubelet[2186]: I1213 09:48:30.267968 2186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:48:30.272967 kubelet[2186]: I1213 09:48:30.271927 2186 policy_none.go:49] "None policy: Start" Dec 13 09:48:30.275897 kubelet[2186]: I1213 09:48:30.274004 2186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 09:48:30.275897 kubelet[2186]: I1213 09:48:30.274054 2186 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:48:30.275897 kubelet[2186]: I1213 09:48:30.274084 2186 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 09:48:30.275897 kubelet[2186]: E1213 09:48:30.274161 2186 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:48:30.275897 kubelet[2186]: I1213 09:48:30.275759 2186 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:48:30.275897 kubelet[2186]: I1213 09:48:30.275898 2186 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:48:30.287550 kubelet[2186]: W1213 09:48:30.287472 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.206.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.287550 kubelet[2186]: E1213 09:48:30.287549 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://159.223.206.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:30.294900 kubelet[2186]: I1213 09:48:30.294340 2186 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.296583 kubelet[2186]: E1213 09:48:30.296519 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.223.206.54:6443/api/v1/nodes\": dial tcp 159.223.206.54:6443: connect: connection refused" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.298256 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 09:48:30.320142 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 09:48:30.329352 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
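Of the cadvisor container factories registered above, only crio fails, because nothing listens on its socket on this node. A minimal sketch of that kind of probe, assuming only the socket path shown in the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Dialing a unix socket that does not exist fails with "no such file or
// directory", matching the crio factory registration error above; the
// systemd and containerd factories register normally on this node.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", time.Second)
	if err != nil {
		fmt.Println("crio factory skipped:", err)
		return
	}
	defer conn.Close()
	fmt.Println("crio socket reachable")
}
```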
Dec 13 09:48:30.339905 kubelet[2186]: I1213 09:48:30.339822 2186 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:48:30.340275 kubelet[2186]: I1213 09:48:30.340202 2186 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:48:30.340478 kubelet[2186]: I1213 09:48:30.340446 2186 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:48:30.346402 kubelet[2186]: E1213 09:48:30.346115 2186 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-d-c5ae8496ec\" not found" Dec 13 09:48:30.375595 kubelet[2186]: I1213 09:48:30.374738 2186 topology_manager.go:215] "Topology Admit Handler" podUID="e8c2f97408a197ad92b107fdee021e69" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.376692 kubelet[2186]: I1213 09:48:30.376644 2186 topology_manager.go:215] "Topology Admit Handler" podUID="6134cd245b865079d8fbb6c5c7ce2d92" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.380442 kubelet[2186]: I1213 09:48:30.379732 2186 topology_manager.go:215] "Topology Admit Handler" podUID="a6a06d487fbcafface42aa4f517fc46a" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.396332 kubelet[2186]: E1213 09:48:30.394795 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.206.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-d-c5ae8496ec?timeout=10s\": dial tcp 159.223.206.54:6443: connect: connection refused" interval="400ms" Dec 13 09:48:30.401199 systemd[1]: Created slice kubepods-burstable-pode8c2f97408a197ad92b107fdee021e69.slice - libcontainer container kubepods-burstable-pode8c2f97408a197ad92b107fdee021e69.slice. Dec 13 09:48:30.422617 systemd[1]: Created slice kubepods-burstable-pod6134cd245b865079d8fbb6c5c7ce2d92.slice - libcontainer container kubepods-burstable-pod6134cd245b865079d8fbb6c5c7ce2d92.slice. Dec 13 09:48:30.436543 systemd[1]: Created slice kubepods-burstable-poda6a06d487fbcafface42aa4f517fc46a.slice - libcontainer container kubepods-burstable-poda6a06d487fbcafface42aa4f517fc46a.slice. 
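The per-pod slices created above follow a readable naming scheme: the QoS class and the pod UID (with dashes replaced by underscores) are folded into the slice name under kubepods.slice. A short sketch that reproduces the names seen in this log; it reflects the naming observed here, not a formal specification:

```go
package main

import (
	"fmt"
	"strings"
)

// Rebuilds the systemd slice names observed in this log from a pod's QoS
// class and UID.
func podSlice(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// Matches kubepods-burstable-pode8c2f97408a197ad92b107fdee021e69.slice above.
	fmt.Println(podSlice("burstable", "e8c2f97408a197ad92b107fdee021e69"))
	// Matches the kube-proxy besteffort slice created later in the log.
	fmt.Println(podSlice("besteffort", "f155f005-c552-4842-9ed1-0c094b87dca1"))
}
```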
Dec 13 09:48:30.493801 kubelet[2186]: I1213 09:48:30.493625 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6a06d487fbcafface42aa4f517fc46a-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-d-c5ae8496ec\" (UID: \"a6a06d487fbcafface42aa4f517fc46a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.493801 kubelet[2186]: I1213 09:48:30.493710 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.493801 kubelet[2186]: I1213 09:48:30.493770 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.493801 kubelet[2186]: I1213 09:48:30.493799 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.493801 kubelet[2186]: I1213 09:48:30.493828 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6134cd245b865079d8fbb6c5c7ce2d92-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-d-c5ae8496ec\" (UID: \"6134cd245b865079d8fbb6c5c7ce2d92\") " pod="kube-system/kube-scheduler-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.494246 kubelet[2186]: I1213 09:48:30.493880 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6a06d487fbcafface42aa4f517fc46a-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-d-c5ae8496ec\" (UID: \"a6a06d487fbcafface42aa4f517fc46a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.494246 kubelet[2186]: I1213 09:48:30.493910 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6a06d487fbcafface42aa4f517fc46a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-d-c5ae8496ec\" (UID: \"a6a06d487fbcafface42aa4f517fc46a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.494246 kubelet[2186]: I1213 09:48:30.493937 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.494246 kubelet[2186]: I1213 09:48:30.493977 2186 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.498677 kubelet[2186]: I1213 09:48:30.498211 2186 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.499167 kubelet[2186]: E1213 09:48:30.499103 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.223.206.54:6443/api/v1/nodes\": dial tcp 159.223.206.54:6443: connect: connection refused" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.712663 kubelet[2186]: E1213 09:48:30.712167 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:30.713364 containerd[1459]: time="2024-12-13T09:48:30.713244088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-d-c5ae8496ec,Uid:e8c2f97408a197ad92b107fdee021e69,Namespace:kube-system,Attempt:0,}" Dec 13 09:48:30.716266 systemd-resolved[1317]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Dec 13 09:48:30.733995 kubelet[2186]: E1213 09:48:30.733287 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:30.739419 containerd[1459]: time="2024-12-13T09:48:30.739085931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-d-c5ae8496ec,Uid:6134cd245b865079d8fbb6c5c7ce2d92,Namespace:kube-system,Attempt:0,}" Dec 13 09:48:30.744917 kubelet[2186]: E1213 09:48:30.742702 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:30.745128 containerd[1459]: time="2024-12-13T09:48:30.743725049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-d-c5ae8496ec,Uid:a6a06d487fbcafface42aa4f517fc46a,Namespace:kube-system,Attempt:0,}" Dec 13 09:48:30.798101 kubelet[2186]: E1213 09:48:30.798034 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.206.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-d-c5ae8496ec?timeout=10s\": dial tcp 159.223.206.54:6443: connect: connection refused" interval="800ms" Dec 13 09:48:30.900735 kubelet[2186]: I1213 09:48:30.900648 2186 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:30.901456 kubelet[2186]: E1213 09:48:30.901359 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.223.206.54:6443/api/v1/nodes\": dial tcp 159.223.206.54:6443: connect: connection refused" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:31.199562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082617700.mount: Deactivated successfully. 
Dec 13 09:48:31.205991 containerd[1459]: time="2024-12-13T09:48:31.205891046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:48:31.207906 containerd[1459]: time="2024-12-13T09:48:31.207716567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:48:31.209059 containerd[1459]: time="2024-12-13T09:48:31.208709657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:48:31.210538 containerd[1459]: time="2024-12-13T09:48:31.210453517Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:48:31.212266 containerd[1459]: time="2024-12-13T09:48:31.212192733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:48:31.212539 containerd[1459]: time="2024-12-13T09:48:31.212498203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 09:48:31.215897 containerd[1459]: time="2024-12-13T09:48:31.213610270Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:48:31.220775 containerd[1459]: time="2024-12-13T09:48:31.220684808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:48:31.222948 containerd[1459]: time="2024-12-13T09:48:31.222884478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.018586ms" Dec 13 09:48:31.227736 containerd[1459]: time="2024-12-13T09:48:31.227569485Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.322936ms" Dec 13 09:48:31.231290 containerd[1459]: time="2024-12-13T09:48:31.231043788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.674548ms" Dec 13 09:48:31.416667 kubelet[2186]: W1213 09:48:31.416598 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.206.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-d-c5ae8496ec&limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused 
Dec 13 09:48:31.417297 kubelet[2186]: E1213 09:48:31.417057 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://159.223.206.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-d-c5ae8496ec&limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:31.425307 containerd[1459]: time="2024-12-13T09:48:31.424772452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:48:31.425307 containerd[1459]: time="2024-12-13T09:48:31.425008284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:48:31.425307 containerd[1459]: time="2024-12-13T09:48:31.425049831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:31.425307 containerd[1459]: time="2024-12-13T09:48:31.425177168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:31.435691 containerd[1459]: time="2024-12-13T09:48:31.435341894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:48:31.435691 containerd[1459]: time="2024-12-13T09:48:31.435536094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:48:31.435691 containerd[1459]: time="2024-12-13T09:48:31.435585990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:31.436143 containerd[1459]: time="2024-12-13T09:48:31.435705991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:31.437881 containerd[1459]: time="2024-12-13T09:48:31.437315616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:48:31.437881 containerd[1459]: time="2024-12-13T09:48:31.437395374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:48:31.437881 containerd[1459]: time="2024-12-13T09:48:31.437412580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:31.437881 containerd[1459]: time="2024-12-13T09:48:31.437521151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:31.470154 systemd[1]: Started cri-containerd-840ea00ecc5173fc0d1707caeb7100cfa9c246dac3c8a6a6f0434dabb7d13a69.scope - libcontainer container 840ea00ecc5173fc0d1707caeb7100cfa9c246dac3c8a6a6f0434dabb7d13a69. Dec 13 09:48:31.485146 systemd[1]: Started cri-containerd-948bed68cbd7bc79ce255da4e04c46c464bfd7f9c56b5cf9df1e01a28f023bc9.scope - libcontainer container 948bed68cbd7bc79ce255da4e04c46c464bfd7f9c56b5cf9df1e01a28f023bc9. Dec 13 09:48:31.497496 systemd[1]: Started cri-containerd-9be0ac49a22cadceeb8d739e4cac4eddbec46f5b0ac2bcf4575867744d186b43.scope - libcontainer container 9be0ac49a22cadceeb8d739e4cac4eddbec46f5b0ac2bcf4575867744d186b43. 
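With the three pause sandboxes started above, the entries that follow show the kubelet walking the usual container-runtime sequence for each static pod: RunPodSandbox returns a sandbox ID, then CreateContainer and StartContainer run inside it. A schematic Go sketch of that ordering; the interface and signatures are illustrative stand-ins, not the real CRI gRPC API:

```go
package main

import "fmt"

// Toy runtime echoing the call order in the log:
// RunPodSandbox -> CreateContainer -> StartContainer.
type runtime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ next int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.next++
	return fmt.Sprintf("sandbox-%d", f.next), nil
}

func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	f.next++
	return fmt.Sprintf("container-%d", f.next), nil
}

func (f *fakeRuntime) StartContainer(containerID string) error { return nil }

// startStaticPod drives one pod through the three-step sequence.
func startStaticPod(r runtime, pod string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return err
	}
	ctr, err := r.CreateContainer(sb, pod)
	if err != nil {
		return err
	}
	return r.StartContainer(ctr)
}

func main() {
	rt := &fakeRuntime{}
	for _, pod := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
		if err := startStaticPod(rt, pod); err != nil {
			fmt.Println(pod, "failed:", err)
			continue
		}
		fmt.Println(pod, "started")
	}
}
```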
Dec 13 09:48:31.527811 kubelet[2186]: W1213 09:48:31.527738 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.206.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:31.527811 kubelet[2186]: E1213 09:48:31.527816 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://159.223.206.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:31.542950 kubelet[2186]: W1213 09:48:31.542591 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.206.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:31.542950 kubelet[2186]: E1213 09:48:31.542719 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://159.223.206.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:31.591096 containerd[1459]: time="2024-12-13T09:48:31.590826238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-d-c5ae8496ec,Uid:a6a06d487fbcafface42aa4f517fc46a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9be0ac49a22cadceeb8d739e4cac4eddbec46f5b0ac2bcf4575867744d186b43\"" Dec 13 09:48:31.592007 containerd[1459]: time="2024-12-13T09:48:31.591938171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-d-c5ae8496ec,Uid:6134cd245b865079d8fbb6c5c7ce2d92,Namespace:kube-system,Attempt:0,} returns sandbox id \"840ea00ecc5173fc0d1707caeb7100cfa9c246dac3c8a6a6f0434dabb7d13a69\"" Dec 13 09:48:31.600364 kubelet[2186]: E1213 09:48:31.599729 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.206.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-d-c5ae8496ec?timeout=10s\": dial tcp 159.223.206.54:6443: connect: connection refused" interval="1.6s" Dec 13 09:48:31.610515 kubelet[2186]: E1213 09:48:31.610454 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:31.610875 kubelet[2186]: E1213 09:48:31.610832 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:31.627019 containerd[1459]: time="2024-12-13T09:48:31.626947333Z" level=info msg="CreateContainer within sandbox \"9be0ac49a22cadceeb8d739e4cac4eddbec46f5b0ac2bcf4575867744d186b43\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 09:48:31.627649 containerd[1459]: time="2024-12-13T09:48:31.627546972Z" level=info msg="CreateContainer within sandbox \"840ea00ecc5173fc0d1707caeb7100cfa9c246dac3c8a6a6f0434dabb7d13a69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 09:48:31.650904 containerd[1459]: time="2024-12-13T09:48:31.650700710Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-d-c5ae8496ec,Uid:e8c2f97408a197ad92b107fdee021e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"948bed68cbd7bc79ce255da4e04c46c464bfd7f9c56b5cf9df1e01a28f023bc9\"" Dec 13 09:48:31.654270 kubelet[2186]: W1213 09:48:31.652054 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.206.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:31.654270 kubelet[2186]: E1213 09:48:31.652228 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://159.223.206.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:31.654270 kubelet[2186]: E1213 09:48:31.652401 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:31.657159 containerd[1459]: time="2024-12-13T09:48:31.657097883Z" level=info msg="CreateContainer within sandbox \"948bed68cbd7bc79ce255da4e04c46c464bfd7f9c56b5cf9df1e01a28f023bc9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 09:48:31.668114 containerd[1459]: time="2024-12-13T09:48:31.668036790Z" level=info msg="CreateContainer within sandbox \"9be0ac49a22cadceeb8d739e4cac4eddbec46f5b0ac2bcf4575867744d186b43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b573722994da18b0c3139f0faa1e1ce84084661301ddb7d42f3f237c01f29015\"" Dec 13 09:48:31.670455 containerd[1459]: time="2024-12-13T09:48:31.670396793Z" level=info msg="StartContainer for \"b573722994da18b0c3139f0faa1e1ce84084661301ddb7d42f3f237c01f29015\"" Dec 13 09:48:31.676386 containerd[1459]: time="2024-12-13T09:48:31.676320086Z" level=info msg="CreateContainer within sandbox \"840ea00ecc5173fc0d1707caeb7100cfa9c246dac3c8a6a6f0434dabb7d13a69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0f93d3cfbe38d80ef1926265c61d1a3742b4b6a36a766cfa7dd5541c99d7f12a\"" Dec 13 09:48:31.678715 containerd[1459]: time="2024-12-13T09:48:31.678645702Z" level=info msg="StartContainer for \"0f93d3cfbe38d80ef1926265c61d1a3742b4b6a36a766cfa7dd5541c99d7f12a\"" Dec 13 09:48:31.690458 containerd[1459]: time="2024-12-13T09:48:31.690335601Z" level=info msg="CreateContainer within sandbox \"948bed68cbd7bc79ce255da4e04c46c464bfd7f9c56b5cf9df1e01a28f023bc9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c9b29c94d058cec4aa0df67402418d8dc3573213d2793940970c611f3f9aded\"" Dec 13 09:48:31.691758 containerd[1459]: time="2024-12-13T09:48:31.691611311Z" level=info msg="StartContainer for \"8c9b29c94d058cec4aa0df67402418d8dc3573213d2793940970c611f3f9aded\"" Dec 13 09:48:31.705623 kubelet[2186]: I1213 09:48:31.705012 2186 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:31.705623 kubelet[2186]: E1213 09:48:31.705567 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.223.206.54:6443/api/v1/nodes\": dial tcp 159.223.206.54:6443: connect: connection refused" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:31.745086 systemd[1]: Started cri-containerd-b573722994da18b0c3139f0faa1e1ce84084661301ddb7d42f3f237c01f29015.scope 
- libcontainer container b573722994da18b0c3139f0faa1e1ce84084661301ddb7d42f3f237c01f29015. Dec 13 09:48:31.757216 systemd[1]: Started cri-containerd-0f93d3cfbe38d80ef1926265c61d1a3742b4b6a36a766cfa7dd5541c99d7f12a.scope - libcontainer container 0f93d3cfbe38d80ef1926265c61d1a3742b4b6a36a766cfa7dd5541c99d7f12a. Dec 13 09:48:31.786271 systemd[1]: Started cri-containerd-8c9b29c94d058cec4aa0df67402418d8dc3573213d2793940970c611f3f9aded.scope - libcontainer container 8c9b29c94d058cec4aa0df67402418d8dc3573213d2793940970c611f3f9aded. Dec 13 09:48:31.862268 containerd[1459]: time="2024-12-13T09:48:31.862175492Z" level=info msg="StartContainer for \"b573722994da18b0c3139f0faa1e1ce84084661301ddb7d42f3f237c01f29015\" returns successfully" Dec 13 09:48:31.918009 containerd[1459]: time="2024-12-13T09:48:31.917311022Z" level=info msg="StartContainer for \"8c9b29c94d058cec4aa0df67402418d8dc3573213d2793940970c611f3f9aded\" returns successfully" Dec 13 09:48:31.922893 containerd[1459]: time="2024-12-13T09:48:31.921720103Z" level=info msg="StartContainer for \"0f93d3cfbe38d80ef1926265c61d1a3742b4b6a36a766cfa7dd5541c99d7f12a\" returns successfully" Dec 13 09:48:32.205251 kubelet[2186]: E1213 09:48:32.204795 2186 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://159.223.206.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 159.223.206.54:6443: connect: connection refused Dec 13 09:48:32.320681 kubelet[2186]: E1213 09:48:32.320633 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:32.325844 kubelet[2186]: E1213 09:48:32.325780 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:32.333000 kubelet[2186]: E1213 09:48:32.331989 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:33.309419 kubelet[2186]: I1213 09:48:33.309366 2186 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:33.347506 kubelet[2186]: E1213 09:48:33.347318 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:33.350093 kubelet[2186]: E1213 09:48:33.348456 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:34.339228 kubelet[2186]: E1213 09:48:34.339175 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:34.718648 kubelet[2186]: E1213 09:48:34.718489 2186 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-d-c5ae8496ec\" not found" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:34.869057 kubelet[2186]: I1213 09:48:34.868984 2186 kubelet_node_status.go:76] "Successfully registered node" 
node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:35.130072 kubelet[2186]: I1213 09:48:35.129814 2186 apiserver.go:52] "Watching apiserver" Dec 13 09:48:35.193182 kubelet[2186]: I1213 09:48:35.193060 2186 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 09:48:36.483778 kubelet[2186]: W1213 09:48:36.483707 2186 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:48:36.484559 kubelet[2186]: E1213 09:48:36.484354 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:37.345256 kubelet[2186]: E1213 09:48:37.345158 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:37.497848 systemd[1]: Reloading requested from client PID 2459 ('systemctl') (unit session-7.scope)... Dec 13 09:48:37.497904 systemd[1]: Reloading... Dec 13 09:48:37.773910 zram_generator::config[2501]: No configuration found. Dec 13 09:48:37.987397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:48:38.160319 systemd[1]: Reloading finished in 661 ms. Dec 13 09:48:38.233418 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:48:38.247560 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 09:48:38.248196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:48:38.248271 systemd[1]: kubelet.service: Consumed 1.147s CPU time, 110.7M memory peak, 0B memory swap peak. Dec 13 09:48:38.256415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:48:38.470192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:48:38.484085 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:48:38.588984 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:48:38.592061 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 09:48:38.592061 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 09:48:38.592061 kubelet[2549]: I1213 09:48:38.589549 2549 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:48:38.613907 kubelet[2549]: I1213 09:48:38.613796 2549 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 09:48:38.613907 kubelet[2549]: I1213 09:48:38.613873 2549 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:48:38.614234 kubelet[2549]: I1213 09:48:38.614221 2549 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 09:48:38.620943 kubelet[2549]: I1213 09:48:38.620187 2549 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 09:48:38.623625 kubelet[2549]: I1213 09:48:38.623565 2549 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:48:38.635917 kubelet[2549]: I1213 09:48:38.635882 2549 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 09:48:38.636660 kubelet[2549]: I1213 09:48:38.636593 2549 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:48:38.637891 kubelet[2549]: I1213 09:48:38.637221 2549 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-d-c5ae8496ec","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 09:48:38.639469 kubelet[2549]: I1213 09:48:38.638214 2549 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:48:38.639469 kubelet[2549]: I1213 09:48:38.638245 2549 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 09:48:38.639469 kubelet[2549]: I1213 09:48:38.638329 2549 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:48:38.639469 kubelet[2549]: I1213 09:48:38.638536 2549 kubelet.go:400] "Attempting to sync node with API server" Dec 13 09:48:38.639469 kubelet[2549]: I1213 09:48:38.638558 2549 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Dec 13 09:48:38.639469 kubelet[2549]: I1213 09:48:38.638583 2549 kubelet.go:312] "Adding apiserver pod source" Dec 13 09:48:38.639469 kubelet[2549]: I1213 09:48:38.638627 2549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:48:38.646216 kubelet[2549]: I1213 09:48:38.646000 2549 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:48:38.653243 kubelet[2549]: I1213 09:48:38.652142 2549 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:48:38.657947 kubelet[2549]: I1213 09:48:38.657770 2549 server.go:1264] "Started kubelet" Dec 13 09:48:38.658690 kubelet[2549]: I1213 09:48:38.658466 2549 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:48:38.662707 kubelet[2549]: I1213 09:48:38.660548 2549 server.go:455] "Adding debug handlers to kubelet server" Dec 13 09:48:38.675449 kubelet[2549]: I1213 09:48:38.674774 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:48:38.698296 kubelet[2549]: I1213 09:48:38.658740 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:48:38.698904 kubelet[2549]: I1213 09:48:38.698797 2549 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:48:38.705698 kubelet[2549]: I1213 09:48:38.705266 2549 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 09:48:38.707723 kubelet[2549]: I1213 09:48:38.707686 2549 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 09:48:38.709209 kubelet[2549]: I1213 09:48:38.709080 2549 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:48:38.728009 kubelet[2549]: I1213 09:48:38.727189 2549 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:48:38.728009 kubelet[2549]: I1213 09:48:38.727329 2549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:48:38.734717 kubelet[2549]: E1213 09:48:38.734631 2549 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:48:38.739449 kubelet[2549]: I1213 09:48:38.739071 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:48:38.740915 kubelet[2549]: I1213 09:48:38.739663 2549 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:48:38.742116 kubelet[2549]: I1213 09:48:38.742079 2549 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 09:48:38.743197 kubelet[2549]: I1213 09:48:38.743059 2549 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:48:38.743365 kubelet[2549]: I1213 09:48:38.743351 2549 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 09:48:38.743512 kubelet[2549]: E1213 09:48:38.743488 2549 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:48:38.808100 kubelet[2549]: I1213 09:48:38.807297 2549 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:38.833922 kubelet[2549]: I1213 09:48:38.833757 2549 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:38.835787 kubelet[2549]: I1213 09:48:38.835732 2549 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:38.844581 kubelet[2549]: E1213 09:48:38.844213 2549 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 09:48:38.880251 kubelet[2549]: I1213 09:48:38.879765 2549 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:48:38.880251 kubelet[2549]: I1213 09:48:38.879790 2549 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:48:38.880251 kubelet[2549]: I1213 09:48:38.879818 2549 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:48:38.880945 kubelet[2549]: I1213 09:48:38.880774 2549 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 09:48:38.880945 kubelet[2549]: I1213 09:48:38.880802 2549 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 09:48:38.880945 kubelet[2549]: I1213 09:48:38.880870 2549 policy_none.go:49] "None policy: Start" Dec 13 09:48:38.882632 kubelet[2549]: I1213 09:48:38.882113 2549 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:48:38.882632 kubelet[2549]: I1213 09:48:38.882143 2549 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:48:38.882725 kubelet[2549]: I1213 09:48:38.882651 2549 state_mem.go:75] "Updated machine memory state" Dec 13 09:48:38.897690 kubelet[2549]: I1213 09:48:38.896910 2549 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:48:38.897690 kubelet[2549]: I1213 09:48:38.897200 2549 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:48:38.901487 kubelet[2549]: I1213 09:48:38.899946 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:48:39.046705 kubelet[2549]: I1213 09:48:39.046073 2549 topology_manager.go:215] "Topology Admit Handler" podUID="a6a06d487fbcafface42aa4f517fc46a" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.046705 kubelet[2549]: I1213 09:48:39.046199 2549 topology_manager.go:215] "Topology Admit Handler" podUID="e8c2f97408a197ad92b107fdee021e69" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.046705 kubelet[2549]: I1213 09:48:39.046262 2549 topology_manager.go:215] "Topology Admit Handler" podUID="6134cd245b865079d8fbb6c5c7ce2d92" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.064327 kubelet[2549]: W1213 09:48:39.063398 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:48:39.065211 kubelet[2549]: E1213 09:48:39.064808 2549 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.1-d-c5ae8496ec\" already exists" pod="kube-system/kube-scheduler-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.065211 kubelet[2549]: W1213 09:48:39.063945 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:48:39.065211 kubelet[2549]: W1213 09:48:39.063959 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:48:39.116309 kubelet[2549]: I1213 09:48:39.114863 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.116309 kubelet[2549]: I1213 09:48:39.114930 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6134cd245b865079d8fbb6c5c7ce2d92-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-d-c5ae8496ec\" (UID: \"6134cd245b865079d8fbb6c5c7ce2d92\") " pod="kube-system/kube-scheduler-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.116309 kubelet[2549]: I1213 09:48:39.114959 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6a06d487fbcafface42aa4f517fc46a-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-d-c5ae8496ec\" (UID: \"a6a06d487fbcafface42aa4f517fc46a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.116309 kubelet[2549]: I1213 09:48:39.114998 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6a06d487fbcafface42aa4f517fc46a-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-d-c5ae8496ec\" (UID: \"a6a06d487fbcafface42aa4f517fc46a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.116309 kubelet[2549]: I1213 09:48:39.115027 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.116615 kubelet[2549]: I1213 09:48:39.115047 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.116615 kubelet[2549]: I1213 09:48:39.115063 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a6a06d487fbcafface42aa4f517fc46a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-d-c5ae8496ec\" (UID: \"a6a06d487fbcafface42aa4f517fc46a\") " pod="kube-system/kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.116615 kubelet[2549]: I1213 09:48:39.115115 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.116615 kubelet[2549]: I1213 09:48:39.115139 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8c2f97408a197ad92b107fdee021e69-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-d-c5ae8496ec\" (UID: \"e8c2f97408a197ad92b107fdee021e69\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.367007 kubelet[2549]: E1213 09:48:39.366213 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:39.368107 kubelet[2549]: E1213 09:48:39.368050 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:39.368545 kubelet[2549]: E1213 09:48:39.368520 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:39.641831 kubelet[2549]: I1213 09:48:39.641656 2549 apiserver.go:52] "Watching apiserver" Dec 13 09:48:39.708457 kubelet[2549]: I1213 09:48:39.708388 2549 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 09:48:39.791882 kubelet[2549]: E1213 09:48:39.791040 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:39.793517 kubelet[2549]: E1213 09:48:39.793473 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:39.840240 kubelet[2549]: W1213 09:48:39.839504 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:48:39.840240 kubelet[2549]: E1213 09:48:39.839612 2549 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-d-c5ae8496ec\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-d-c5ae8496ec" Dec 13 09:48:39.840240 kubelet[2549]: E1213 09:48:39.840144 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:39.997923 kubelet[2549]: I1213 09:48:39.997693 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-d-c5ae8496ec" podStartSLOduration=3.9976702299999998 
podStartE2EDuration="3.99767023s" podCreationTimestamp="2024-12-13 09:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:48:39.889473249 +0000 UTC m=+1.394359087" watchObservedRunningTime="2024-12-13 09:48:39.99767023 +0000 UTC m=+1.502556070" Dec 13 09:48:40.083959 kubelet[2549]: I1213 09:48:40.083214 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-d-c5ae8496ec" podStartSLOduration=1.083007435 podStartE2EDuration="1.083007435s" podCreationTimestamp="2024-12-13 09:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:48:39.99889794 +0000 UTC m=+1.503783807" watchObservedRunningTime="2024-12-13 09:48:40.083007435 +0000 UTC m=+1.587893305" Dec 13 09:48:40.084830 kubelet[2549]: I1213 09:48:40.084149 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-d-c5ae8496ec" podStartSLOduration=1.084127999 podStartE2EDuration="1.084127999s" podCreationTimestamp="2024-12-13 09:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:48:40.083874609 +0000 UTC m=+1.588760473" watchObservedRunningTime="2024-12-13 09:48:40.084127999 +0000 UTC m=+1.589013892" Dec 13 09:48:40.795198 kubelet[2549]: E1213 09:48:40.795145 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:44.945979 sudo[1641]: pam_unix(sudo:session): session closed for user root Dec 13 09:48:44.952839 sshd[1638]: pam_unix(sshd:session): session closed for user core Dec 13 09:48:44.957923 systemd[1]: sshd@6-159.223.206.54:22-147.75.109.163:46998.service: Deactivated successfully. Dec 13 09:48:44.962711 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 09:48:44.963136 systemd[1]: session-7.scope: Consumed 7.103s CPU time, 187.6M memory peak, 0B memory swap peak. Dec 13 09:48:44.965574 systemd-logind[1435]: Session 7 logged out. Waiting for processes to exit. Dec 13 09:48:44.967454 systemd-logind[1435]: Removed session 7. 
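The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and the observed running time; for the scheduler pod that is 09:48:39.99767023 minus 09:48:36, roughly 3.998s, matching the logged value. A small sketch reproducing that arithmetic with the timestamps from the log:

```go
package main

import (
	"fmt"
	"time"
)

// Recomputes podStartSLOduration for the kube-scheduler pod from the two
// timestamps in the latency-tracker entry above.
func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // matches the logged timestamps
	created, err := time.Parse(layout, "2024-12-13 09:48:36 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-12-13 09:48:39.99767023 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints ~3.99767023s, matching podStartSLOduration=3.9976702299999998 above.
	fmt.Printf("podStartSLOduration ~ %.8fs\n", running.Sub(created).Seconds())
}
```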
Dec 13 09:48:46.087484 kubelet[2549]: E1213 09:48:46.087432 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:46.109614 kubelet[2549]: E1213 09:48:46.109221 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:46.815904 kubelet[2549]: E1213 09:48:46.815247 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:46.815904 kubelet[2549]: E1213 09:48:46.815264 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:47.494250 kubelet[2549]: E1213 09:48:47.493866 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:47.817014 kubelet[2549]: E1213 09:48:47.816346 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:49.883796 update_engine[1437]: I20241213 09:48:49.883667 1437 update_attempter.cc:509] Updating boot flags... Dec 13 09:48:49.931988 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2634) Dec 13 09:48:50.049476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2633) Dec 13 09:48:50.151033 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2633) Dec 13 09:48:50.808237 kubelet[2549]: I1213 09:48:50.808011 2549 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 09:48:50.809531 containerd[1459]: time="2024-12-13T09:48:50.809339217Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 09:48:50.811231 kubelet[2549]: I1213 09:48:50.809733 2549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 09:48:51.919902 kubelet[2549]: I1213 09:48:51.916566 2549 topology_manager.go:215] "Topology Admit Handler" podUID="f155f005-c552-4842-9ed1-0c094b87dca1" podNamespace="kube-system" podName="kube-proxy-fm8gq" Dec 13 09:48:51.936266 systemd[1]: Created slice kubepods-besteffort-podf155f005_c552_4842_9ed1_0c094b87dca1.slice - libcontainer container kubepods-besteffort-podf155f005_c552_4842_9ed1_0c094b87dca1.slice. 
Dec 13 09:48:52.003903 kubelet[2549]: I1213 09:48:52.002610 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f155f005-c552-4842-9ed1-0c094b87dca1-xtables-lock\") pod \"kube-proxy-fm8gq\" (UID: \"f155f005-c552-4842-9ed1-0c094b87dca1\") " pod="kube-system/kube-proxy-fm8gq" Dec 13 09:48:52.003903 kubelet[2549]: I1213 09:48:52.002698 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f155f005-c552-4842-9ed1-0c094b87dca1-kube-proxy\") pod \"kube-proxy-fm8gq\" (UID: \"f155f005-c552-4842-9ed1-0c094b87dca1\") " pod="kube-system/kube-proxy-fm8gq" Dec 13 09:48:52.003903 kubelet[2549]: I1213 09:48:52.002728 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f155f005-c552-4842-9ed1-0c094b87dca1-lib-modules\") pod \"kube-proxy-fm8gq\" (UID: \"f155f005-c552-4842-9ed1-0c094b87dca1\") " pod="kube-system/kube-proxy-fm8gq" Dec 13 09:48:52.003903 kubelet[2549]: I1213 09:48:52.002755 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gr2v\" (UniqueName: \"kubernetes.io/projected/f155f005-c552-4842-9ed1-0c094b87dca1-kube-api-access-9gr2v\") pod \"kube-proxy-fm8gq\" (UID: \"f155f005-c552-4842-9ed1-0c094b87dca1\") " pod="kube-system/kube-proxy-fm8gq" Dec 13 09:48:52.066981 kubelet[2549]: I1213 09:48:52.066915 2549 topology_manager.go:215] "Topology Admit Handler" podUID="15368385-ef20-4acc-a7b9-c12b624bb540" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-dxzj8" Dec 13 09:48:52.083140 systemd[1]: Created slice kubepods-besteffort-pod15368385_ef20_4acc_a7b9_c12b624bb540.slice - libcontainer container kubepods-besteffort-pod15368385_ef20_4acc_a7b9_c12b624bb540.slice. 
Dec 13 09:48:52.087811 kubelet[2549]: W1213 09:48:52.087707 2549 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.2.1-d-c5ae8496ec" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.2.1-d-c5ae8496ec' and this object Dec 13 09:48:52.087811 kubelet[2549]: E1213 09:48:52.087795 2549 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.2.1-d-c5ae8496ec" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.2.1-d-c5ae8496ec' and this object Dec 13 09:48:52.092879 kubelet[2549]: W1213 09:48:52.092609 2549 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.2.1-d-c5ae8496ec" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.2.1-d-c5ae8496ec' and this object Dec 13 09:48:52.092879 kubelet[2549]: E1213 09:48:52.092665 2549 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.2.1-d-c5ae8496ec" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.2.1-d-c5ae8496ec' and this object Dec 13 09:48:52.110791 kubelet[2549]: I1213 09:48:52.107633 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15368385-ef20-4acc-a7b9-c12b624bb540-var-lib-calico\") pod \"tigera-operator-7bc55997bb-dxzj8\" (UID: \"15368385-ef20-4acc-a7b9-c12b624bb540\") " pod="tigera-operator/tigera-operator-7bc55997bb-dxzj8" Dec 13 09:48:52.110791 kubelet[2549]: I1213 09:48:52.109130 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz6m6\" (UniqueName: \"kubernetes.io/projected/15368385-ef20-4acc-a7b9-c12b624bb540-kube-api-access-lz6m6\") pod \"tigera-operator-7bc55997bb-dxzj8\" (UID: \"15368385-ef20-4acc-a7b9-c12b624bb540\") " pod="tigera-operator/tigera-operator-7bc55997bb-dxzj8" Dec 13 09:48:52.246441 kubelet[2549]: E1213 09:48:52.246279 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:52.248252 containerd[1459]: time="2024-12-13T09:48:52.248196054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fm8gq,Uid:f155f005-c552-4842-9ed1-0c094b87dca1,Namespace:kube-system,Attempt:0,}" Dec 13 09:48:52.290303 containerd[1459]: time="2024-12-13T09:48:52.289619475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:48:52.290303 containerd[1459]: time="2024-12-13T09:48:52.289869439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:48:52.291027 containerd[1459]: time="2024-12-13T09:48:52.290723036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:52.291027 containerd[1459]: time="2024-12-13T09:48:52.290915688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:52.326206 systemd[1]: Started cri-containerd-cf90ea74710096a754867d582790d45b3da3af8e25b136fcd0f875db10c08f0a.scope - libcontainer container cf90ea74710096a754867d582790d45b3da3af8e25b136fcd0f875db10c08f0a. Dec 13 09:48:52.366429 containerd[1459]: time="2024-12-13T09:48:52.366345357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fm8gq,Uid:f155f005-c552-4842-9ed1-0c094b87dca1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf90ea74710096a754867d582790d45b3da3af8e25b136fcd0f875db10c08f0a\"" Dec 13 09:48:52.368503 kubelet[2549]: E1213 09:48:52.368054 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:52.375541 containerd[1459]: time="2024-12-13T09:48:52.375461826Z" level=info msg="CreateContainer within sandbox \"cf90ea74710096a754867d582790d45b3da3af8e25b136fcd0f875db10c08f0a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 09:48:52.397581 containerd[1459]: time="2024-12-13T09:48:52.397497128Z" level=info msg="CreateContainer within sandbox \"cf90ea74710096a754867d582790d45b3da3af8e25b136fcd0f875db10c08f0a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d3d2016864b9eedffe535805f064801d39bd838734fc8074ec292cd0f26b98fe\"" Dec 13 09:48:52.398687 containerd[1459]: time="2024-12-13T09:48:52.398626762Z" level=info msg="StartContainer for \"d3d2016864b9eedffe535805f064801d39bd838734fc8074ec292cd0f26b98fe\"" Dec 13 09:48:52.438232 systemd[1]: Started cri-containerd-d3d2016864b9eedffe535805f064801d39bd838734fc8074ec292cd0f26b98fe.scope - libcontainer container d3d2016864b9eedffe535805f064801d39bd838734fc8074ec292cd0f26b98fe. Dec 13 09:48:52.481556 containerd[1459]: time="2024-12-13T09:48:52.481490207Z" level=info msg="StartContainer for \"d3d2016864b9eedffe535805f064801d39bd838734fc8074ec292cd0f26b98fe\" returns successfully" Dec 13 09:48:52.832618 kubelet[2549]: E1213 09:48:52.832143 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:52.996017 containerd[1459]: time="2024-12-13T09:48:52.995058869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-dxzj8,Uid:15368385-ef20-4acc-a7b9-c12b624bb540,Namespace:tigera-operator,Attempt:0,}" Dec 13 09:48:53.052445 containerd[1459]: time="2024-12-13T09:48:53.051945519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:48:53.052445 containerd[1459]: time="2024-12-13T09:48:53.052054164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:48:53.052445 containerd[1459]: time="2024-12-13T09:48:53.052074514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:53.052445 containerd[1459]: time="2024-12-13T09:48:53.052234436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:53.090568 systemd[1]: Started cri-containerd-0b415ba40b2aac41df3cfe525e7ebd1a1c326000317454a6b1b0bf5a5ccd9134.scope - libcontainer container 0b415ba40b2aac41df3cfe525e7ebd1a1c326000317454a6b1b0bf5a5ccd9134. Dec 13 09:48:53.155141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808477980.mount: Deactivated successfully. Dec 13 09:48:53.188091 containerd[1459]: time="2024-12-13T09:48:53.187198782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-dxzj8,Uid:15368385-ef20-4acc-a7b9-c12b624bb540,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b415ba40b2aac41df3cfe525e7ebd1a1c326000317454a6b1b0bf5a5ccd9134\"" Dec 13 09:48:53.213788 containerd[1459]: time="2024-12-13T09:48:53.213621206Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 09:48:54.758434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798947077.mount: Deactivated successfully. Dec 13 09:48:55.481880 containerd[1459]: time="2024-12-13T09:48:55.480119518Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:55.481880 containerd[1459]: time="2024-12-13T09:48:55.481185958Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:55.481880 containerd[1459]: time="2024-12-13T09:48:55.481273073Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763681" Dec 13 09:48:55.485464 containerd[1459]: time="2024-12-13T09:48:55.485396298Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:48:55.487157 containerd[1459]: time="2024-12-13T09:48:55.487097763Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.273418274s" Dec 13 09:48:55.487387 containerd[1459]: time="2024-12-13T09:48:55.487359569Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 09:48:55.504001 containerd[1459]: time="2024-12-13T09:48:55.503937971Z" level=info msg="CreateContainer within sandbox \"0b415ba40b2aac41df3cfe525e7ebd1a1c326000317454a6b1b0bf5a5ccd9134\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 09:48:55.521476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834642726.mount: Deactivated successfully. 
Dec 13 09:48:55.525452 containerd[1459]: time="2024-12-13T09:48:55.525385726Z" level=info msg="CreateContainer within sandbox \"0b415ba40b2aac41df3cfe525e7ebd1a1c326000317454a6b1b0bf5a5ccd9134\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5f582e1fa3755a5b322593fbd5196ea7a4f5fa12c84fe6ef54d11af199168c99\"" Dec 13 09:48:55.532440 containerd[1459]: time="2024-12-13T09:48:55.532391653Z" level=info msg="StartContainer for \"5f582e1fa3755a5b322593fbd5196ea7a4f5fa12c84fe6ef54d11af199168c99\"" Dec 13 09:48:55.600182 systemd[1]: Started cri-containerd-5f582e1fa3755a5b322593fbd5196ea7a4f5fa12c84fe6ef54d11af199168c99.scope - libcontainer container 5f582e1fa3755a5b322593fbd5196ea7a4f5fa12c84fe6ef54d11af199168c99. Dec 13 09:48:55.650836 containerd[1459]: time="2024-12-13T09:48:55.650774372Z" level=info msg="StartContainer for \"5f582e1fa3755a5b322593fbd5196ea7a4f5fa12c84fe6ef54d11af199168c99\" returns successfully" Dec 13 09:48:55.867882 kubelet[2549]: I1213 09:48:55.866550 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fm8gq" podStartSLOduration=4.866525368 podStartE2EDuration="4.866525368s" podCreationTimestamp="2024-12-13 09:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:48:52.85751121 +0000 UTC m=+14.362397068" watchObservedRunningTime="2024-12-13 09:48:55.866525368 +0000 UTC m=+17.371411222" Dec 13 09:48:59.207939 kubelet[2549]: I1213 09:48:59.205976 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-dxzj8" podStartSLOduration=5.929895321 podStartE2EDuration="8.205948915s" podCreationTimestamp="2024-12-13 09:48:51 +0000 UTC" firstStartedPulling="2024-12-13 09:48:53.212769244 +0000 UTC m=+14.717655099" lastFinishedPulling="2024-12-13 09:48:55.488822846 +0000 UTC m=+16.993708693" observedRunningTime="2024-12-13 09:48:55.869192606 +0000 UTC m=+17.374078464" watchObservedRunningTime="2024-12-13 09:48:59.205948915 +0000 UTC m=+20.710834780" Dec 13 09:48:59.207939 kubelet[2549]: I1213 09:48:59.206245 2549 topology_manager.go:215] "Topology Admit Handler" podUID="1a2842c0-44c1-4729-941c-c911a44ef5ed" podNamespace="calico-system" podName="calico-typha-776c847858-gd7xc" Dec 13 09:48:59.227325 systemd[1]: Created slice kubepods-besteffort-pod1a2842c0_44c1_4729_941c_c911a44ef5ed.slice - libcontainer container kubepods-besteffort-pod1a2842c0_44c1_4729_941c_c911a44ef5ed.slice. 
Dec 13 09:48:59.356101 kubelet[2549]: I1213 09:48:59.356055 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1a2842c0-44c1-4729-941c-c911a44ef5ed-typha-certs\") pod \"calico-typha-776c847858-gd7xc\" (UID: \"1a2842c0-44c1-4729-941c-c911a44ef5ed\") " pod="calico-system/calico-typha-776c847858-gd7xc" Dec 13 09:48:59.356430 kubelet[2549]: I1213 09:48:59.356122 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a2842c0-44c1-4729-941c-c911a44ef5ed-tigera-ca-bundle\") pod \"calico-typha-776c847858-gd7xc\" (UID: \"1a2842c0-44c1-4729-941c-c911a44ef5ed\") " pod="calico-system/calico-typha-776c847858-gd7xc" Dec 13 09:48:59.356430 kubelet[2549]: I1213 09:48:59.356159 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44q8\" (UniqueName: \"kubernetes.io/projected/1a2842c0-44c1-4729-941c-c911a44ef5ed-kube-api-access-x44q8\") pod \"calico-typha-776c847858-gd7xc\" (UID: \"1a2842c0-44c1-4729-941c-c911a44ef5ed\") " pod="calico-system/calico-typha-776c847858-gd7xc" Dec 13 09:48:59.392563 kubelet[2549]: I1213 09:48:59.392472 2549 topology_manager.go:215] "Topology Admit Handler" podUID="de0c84c1-6691-42aa-8583-551a87a664aa" podNamespace="calico-system" podName="calico-node-hdmnj" Dec 13 09:48:59.406305 systemd[1]: Created slice kubepods-besteffort-podde0c84c1_6691_42aa_8583_551a87a664aa.slice - libcontainer container kubepods-besteffort-podde0c84c1_6691_42aa_8583_551a87a664aa.slice. Dec 13 09:48:59.533893 kubelet[2549]: E1213 09:48:59.532766 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:59.539519 containerd[1459]: time="2024-12-13T09:48:59.537687178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-776c847858-gd7xc,Uid:1a2842c0-44c1-4729-941c-c911a44ef5ed,Namespace:calico-system,Attempt:0,}" Dec 13 09:48:59.560761 kubelet[2549]: I1213 09:48:59.559153 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-xtables-lock\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.560761 kubelet[2549]: I1213 09:48:59.559209 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-var-lib-calico\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.560761 kubelet[2549]: I1213 09:48:59.559243 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-cni-bin-dir\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.560761 kubelet[2549]: I1213 09:48:59.559262 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-flexvol-driver-host\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.560761 kubelet[2549]: I1213 09:48:59.559283 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fd5\" (UniqueName: \"kubernetes.io/projected/de0c84c1-6691-42aa-8583-551a87a664aa-kube-api-access-t7fd5\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.561364 kubelet[2549]: I1213 09:48:59.559332 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-lib-modules\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.563911 kubelet[2549]: I1213 09:48:59.559365 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/de0c84c1-6691-42aa-8583-551a87a664aa-node-certs\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.563911 kubelet[2549]: I1213 09:48:59.563606 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de0c84c1-6691-42aa-8583-551a87a664aa-tigera-ca-bundle\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.563911 kubelet[2549]: I1213 09:48:59.563680 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-cni-net-dir\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.563911 kubelet[2549]: I1213 09:48:59.563719 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-cni-log-dir\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.563911 kubelet[2549]: I1213 09:48:59.563749 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-policysync\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.564210 kubelet[2549]: I1213 09:48:59.563767 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/de0c84c1-6691-42aa-8583-551a87a664aa-var-run-calico\") pod \"calico-node-hdmnj\" (UID: \"de0c84c1-6691-42aa-8583-551a87a664aa\") " pod="calico-system/calico-node-hdmnj" Dec 13 09:48:59.615539 containerd[1459]: time="2024-12-13T09:48:59.615168979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:48:59.615539 containerd[1459]: time="2024-12-13T09:48:59.615234045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:48:59.615539 containerd[1459]: time="2024-12-13T09:48:59.615245798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:59.615539 containerd[1459]: time="2024-12-13T09:48:59.615366668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:59.665471 systemd[1]: Started cri-containerd-38a1a37be90b9953935f01f423632540cb5ae4928d112ed7d5b944e9d5a2ceff.scope - libcontainer container 38a1a37be90b9953935f01f423632540cb5ae4928d112ed7d5b944e9d5a2ceff. Dec 13 09:48:59.683049 kubelet[2549]: E1213 09:48:59.682714 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.683049 kubelet[2549]: W1213 09:48:59.682811 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.683049 kubelet[2549]: E1213 09:48:59.682891 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.686886 kubelet[2549]: E1213 09:48:59.685474 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.686886 kubelet[2549]: W1213 09:48:59.685507 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.686886 kubelet[2549]: E1213 09:48:59.685536 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.707055 kubelet[2549]: E1213 09:48:59.707006 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.707055 kubelet[2549]: W1213 09:48:59.707047 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.707297 kubelet[2549]: E1213 09:48:59.707082 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.714748 kubelet[2549]: E1213 09:48:59.714699 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:48:59.716795 kubelet[2549]: I1213 09:48:59.715883 2549 topology_manager.go:215] "Topology Admit Handler" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" podNamespace="calico-system" podName="csi-node-driver-z8kgk" Dec 13 09:48:59.716795 kubelet[2549]: E1213 09:48:59.716338 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8kgk" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" Dec 13 09:48:59.718082 containerd[1459]: time="2024-12-13T09:48:59.716198077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdmnj,Uid:de0c84c1-6691-42aa-8583-551a87a664aa,Namespace:calico-system,Attempt:0,}" Dec 13 09:48:59.764238 kubelet[2549]: E1213 09:48:59.764185 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.764238 kubelet[2549]: W1213 09:48:59.764225 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.764238 kubelet[2549]: E1213 09:48:59.764258 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.766989 kubelet[2549]: E1213 09:48:59.766918 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.766989 kubelet[2549]: W1213 09:48:59.766960 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.766989 kubelet[2549]: E1213 09:48:59.766999 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.771007 kubelet[2549]: E1213 09:48:59.769935 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.771007 kubelet[2549]: W1213 09:48:59.769988 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.771007 kubelet[2549]: E1213 09:48:59.770027 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.771526 kubelet[2549]: E1213 09:48:59.771315 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.771526 kubelet[2549]: W1213 09:48:59.771343 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.771526 kubelet[2549]: E1213 09:48:59.771369 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.773128 kubelet[2549]: E1213 09:48:59.773067 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.773128 kubelet[2549]: W1213 09:48:59.773107 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.773128 kubelet[2549]: E1213 09:48:59.773141 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.774283 kubelet[2549]: E1213 09:48:59.774239 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.774283 kubelet[2549]: W1213 09:48:59.774274 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.774544 kubelet[2549]: E1213 09:48:59.774312 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.777762 kubelet[2549]: E1213 09:48:59.777703 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.777762 kubelet[2549]: W1213 09:48:59.777740 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.777762 kubelet[2549]: E1213 09:48:59.777773 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.780549 kubelet[2549]: E1213 09:48:59.780474 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.780549 kubelet[2549]: W1213 09:48:59.780524 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.780549 kubelet[2549]: E1213 09:48:59.780559 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.784176 kubelet[2549]: E1213 09:48:59.784014 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.784176 kubelet[2549]: W1213 09:48:59.784055 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.784176 kubelet[2549]: E1213 09:48:59.784089 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.789943 kubelet[2549]: E1213 09:48:59.789878 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.789943 kubelet[2549]: W1213 09:48:59.789919 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.790341 kubelet[2549]: E1213 09:48:59.789961 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.793045 kubelet[2549]: E1213 09:48:59.792986 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.793045 kubelet[2549]: W1213 09:48:59.793034 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.794110 kubelet[2549]: E1213 09:48:59.793069 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.796381 kubelet[2549]: E1213 09:48:59.796315 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.796381 kubelet[2549]: W1213 09:48:59.796346 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.796381 kubelet[2549]: E1213 09:48:59.796375 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.799703 kubelet[2549]: E1213 09:48:59.799647 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.799703 kubelet[2549]: W1213 09:48:59.799686 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.799703 kubelet[2549]: E1213 09:48:59.799719 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.801621 kubelet[2549]: E1213 09:48:59.801191 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.801621 kubelet[2549]: W1213 09:48:59.801605 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.801794 kubelet[2549]: E1213 09:48:59.801637 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.803014 kubelet[2549]: E1213 09:48:59.802952 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.803014 kubelet[2549]: W1213 09:48:59.802985 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.803216 kubelet[2549]: E1213 09:48:59.803025 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.804253 kubelet[2549]: E1213 09:48:59.804152 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.805182 kubelet[2549]: W1213 09:48:59.804616 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.805182 kubelet[2549]: E1213 09:48:59.804665 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.807086 kubelet[2549]: E1213 09:48:59.806087 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.807086 kubelet[2549]: W1213 09:48:59.806888 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.807086 kubelet[2549]: E1213 09:48:59.806931 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.807833 kubelet[2549]: E1213 09:48:59.807655 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.807833 kubelet[2549]: W1213 09:48:59.807677 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.807833 kubelet[2549]: E1213 09:48:59.807701 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.808233 containerd[1459]: time="2024-12-13T09:48:59.807728277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:48:59.808233 containerd[1459]: time="2024-12-13T09:48:59.808190029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:48:59.808383 containerd[1459]: time="2024-12-13T09:48:59.808272460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:59.808803 kubelet[2549]: E1213 09:48:59.808482 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.808803 kubelet[2549]: W1213 09:48:59.808500 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.808803 kubelet[2549]: E1213 09:48:59.808522 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.809459 containerd[1459]: time="2024-12-13T09:48:59.808671022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:48:59.810358 kubelet[2549]: E1213 09:48:59.809642 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.810358 kubelet[2549]: W1213 09:48:59.809667 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.810358 kubelet[2549]: E1213 09:48:59.809693 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.810744 kubelet[2549]: E1213 09:48:59.810711 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.811087 kubelet[2549]: W1213 09:48:59.811054 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.811161 kubelet[2549]: E1213 09:48:59.811089 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.811161 kubelet[2549]: I1213 09:48:59.811137 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56d0e423-edd4-4223-a5ef-7fe3393e4271-registration-dir\") pod \"csi-node-driver-z8kgk\" (UID: \"56d0e423-edd4-4223-a5ef-7fe3393e4271\") " pod="calico-system/csi-node-driver-z8kgk" Dec 13 09:48:59.811509 kubelet[2549]: E1213 09:48:59.811452 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.811509 kubelet[2549]: W1213 09:48:59.811473 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.811509 kubelet[2549]: E1213 09:48:59.811497 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.811662 kubelet[2549]: I1213 09:48:59.811522 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54m4c\" (UniqueName: \"kubernetes.io/projected/56d0e423-edd4-4223-a5ef-7fe3393e4271-kube-api-access-54m4c\") pod \"csi-node-driver-z8kgk\" (UID: \"56d0e423-edd4-4223-a5ef-7fe3393e4271\") " pod="calico-system/csi-node-driver-z8kgk" Dec 13 09:48:59.812834 kubelet[2549]: E1213 09:48:59.812776 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.812834 kubelet[2549]: W1213 09:48:59.812801 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.812834 kubelet[2549]: E1213 09:48:59.812828 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.813650 kubelet[2549]: I1213 09:48:59.812873 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56d0e423-edd4-4223-a5ef-7fe3393e4271-socket-dir\") pod \"csi-node-driver-z8kgk\" (UID: \"56d0e423-edd4-4223-a5ef-7fe3393e4271\") " pod="calico-system/csi-node-driver-z8kgk" Dec 13 09:48:59.813794 kubelet[2549]: E1213 09:48:59.813774 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.813894 kubelet[2549]: W1213 09:48:59.813880 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.813978 kubelet[2549]: E1213 09:48:59.813965 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.814586 kubelet[2549]: E1213 09:48:59.814565 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.815030 kubelet[2549]: W1213 09:48:59.814677 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.815030 kubelet[2549]: E1213 09:48:59.814735 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.815964 kubelet[2549]: E1213 09:48:59.815936 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.816070 kubelet[2549]: W1213 09:48:59.816046 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.816240 kubelet[2549]: E1213 09:48:59.816197 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.816768 kubelet[2549]: E1213 09:48:59.816512 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.816768 kubelet[2549]: W1213 09:48:59.816533 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.816768 kubelet[2549]: E1213 09:48:59.816576 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.816768 kubelet[2549]: I1213 09:48:59.816622 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/56d0e423-edd4-4223-a5ef-7fe3393e4271-varrun\") pod \"csi-node-driver-z8kgk\" (UID: \"56d0e423-edd4-4223-a5ef-7fe3393e4271\") " pod="calico-system/csi-node-driver-z8kgk" Dec 13 09:48:59.817457 kubelet[2549]: E1213 09:48:59.817318 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.817738 kubelet[2549]: W1213 09:48:59.817590 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.818716 kubelet[2549]: E1213 09:48:59.817950 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.819883 kubelet[2549]: E1213 09:48:59.819313 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.819883 kubelet[2549]: W1213 09:48:59.819334 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.819883 kubelet[2549]: E1213 09:48:59.819356 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.820697 kubelet[2549]: E1213 09:48:59.820336 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.820697 kubelet[2549]: W1213 09:48:59.820444 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.820697 kubelet[2549]: E1213 09:48:59.820477 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.820697 kubelet[2549]: I1213 09:48:59.820520 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d0e423-edd4-4223-a5ef-7fe3393e4271-kubelet-dir\") pod \"csi-node-driver-z8kgk\" (UID: \"56d0e423-edd4-4223-a5ef-7fe3393e4271\") " pod="calico-system/csi-node-driver-z8kgk" Dec 13 09:48:59.822636 kubelet[2549]: E1213 09:48:59.822592 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.822636 kubelet[2549]: W1213 09:48:59.822625 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.822801 kubelet[2549]: E1213 09:48:59.822665 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.825259 kubelet[2549]: E1213 09:48:59.825215 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.825259 kubelet[2549]: W1213 09:48:59.825245 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.825259 kubelet[2549]: E1213 09:48:59.825288 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.826429 kubelet[2549]: E1213 09:48:59.825674 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.826429 kubelet[2549]: W1213 09:48:59.825686 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.826429 kubelet[2549]: E1213 09:48:59.825701 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.826429 kubelet[2549]: E1213 09:48:59.826141 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.826429 kubelet[2549]: W1213 09:48:59.826159 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.826429 kubelet[2549]: E1213 09:48:59.826178 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.827399 kubelet[2549]: E1213 09:48:59.826558 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.827399 kubelet[2549]: W1213 09:48:59.826569 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.827399 kubelet[2549]: E1213 09:48:59.826582 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.844209 systemd[1]: Started cri-containerd-b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250.scope - libcontainer container b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250. Dec 13 09:48:59.929458 kubelet[2549]: E1213 09:48:59.929396 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.929458 kubelet[2549]: W1213 09:48:59.929427 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.929458 kubelet[2549]: E1213 09:48:59.929458 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.932495 kubelet[2549]: E1213 09:48:59.932437 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.932495 kubelet[2549]: W1213 09:48:59.932472 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.932495 kubelet[2549]: E1213 09:48:59.932511 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.933820 kubelet[2549]: E1213 09:48:59.933772 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.933820 kubelet[2549]: W1213 09:48:59.933796 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.934792 kubelet[2549]: E1213 09:48:59.934748 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.935160 kubelet[2549]: E1213 09:48:59.934928 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.935160 kubelet[2549]: W1213 09:48:59.934940 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.935160 kubelet[2549]: E1213 09:48:59.935037 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.937059 kubelet[2549]: E1213 09:48:59.937021 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.937059 kubelet[2549]: W1213 09:48:59.937049 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.937903 kubelet[2549]: E1213 09:48:59.937808 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.938410 kubelet[2549]: E1213 09:48:59.938053 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.938410 kubelet[2549]: W1213 09:48:59.938073 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.938410 kubelet[2549]: E1213 09:48:59.938120 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.942048 kubelet[2549]: E1213 09:48:59.941999 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.942048 kubelet[2549]: W1213 09:48:59.942032 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.942048 kubelet[2549]: E1213 09:48:59.942112 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.943170 kubelet[2549]: E1213 09:48:59.942554 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.943170 kubelet[2549]: W1213 09:48:59.942576 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.943614 kubelet[2549]: E1213 09:48:59.943570 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.945108 kubelet[2549]: E1213 09:48:59.944956 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.945108 kubelet[2549]: W1213 09:48:59.944989 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.945890 kubelet[2549]: E1213 09:48:59.945230 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.946025 kubelet[2549]: E1213 09:48:59.945939 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.946025 kubelet[2549]: W1213 09:48:59.945966 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.948405 kubelet[2549]: E1213 09:48:59.948283 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.951082 kubelet[2549]: E1213 09:48:59.951020 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.951082 kubelet[2549]: W1213 09:48:59.951070 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.951519 kubelet[2549]: E1213 09:48:59.951172 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.952130 kubelet[2549]: E1213 09:48:59.952090 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.952130 kubelet[2549]: W1213 09:48:59.952116 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.952501 kubelet[2549]: E1213 09:48:59.952376 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.954464 kubelet[2549]: E1213 09:48:59.954429 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.954464 kubelet[2549]: W1213 09:48:59.954454 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.954725 kubelet[2549]: E1213 09:48:59.954561 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.954931 kubelet[2549]: E1213 09:48:59.954892 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.954931 kubelet[2549]: W1213 09:48:59.954916 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.955261 kubelet[2549]: E1213 09:48:59.955011 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.955261 kubelet[2549]: E1213 09:48:59.955145 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.955261 kubelet[2549]: W1213 09:48:59.955151 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.955261 kubelet[2549]: E1213 09:48:59.955225 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.955559 kubelet[2549]: E1213 09:48:59.955344 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.955559 kubelet[2549]: W1213 09:48:59.955353 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.955559 kubelet[2549]: E1213 09:48:59.955416 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.957142 kubelet[2549]: E1213 09:48:59.957097 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.957142 kubelet[2549]: W1213 09:48:59.957131 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.957693 kubelet[2549]: E1213 09:48:59.957390 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.959728 kubelet[2549]: E1213 09:48:59.958833 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.959728 kubelet[2549]: W1213 09:48:59.958912 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.961888 kubelet[2549]: E1213 09:48:59.961497 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.961888 kubelet[2549]: E1213 09:48:59.961899 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.962134 kubelet[2549]: W1213 09:48:59.961915 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.962796 kubelet[2549]: E1213 09:48:59.962216 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.962796 kubelet[2549]: W1213 09:48:59.962238 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.962796 kubelet[2549]: E1213 09:48:59.962405 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.962796 kubelet[2549]: E1213 09:48:59.962464 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.968413 kubelet[2549]: E1213 09:48:59.968361 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.968413 kubelet[2549]: W1213 09:48:59.968396 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.969038 kubelet[2549]: E1213 09:48:59.968610 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.969038 kubelet[2549]: E1213 09:48:59.968956 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.969038 kubelet[2549]: W1213 09:48:59.968975 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.969689 kubelet[2549]: E1213 09:48:59.969499 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.969689 kubelet[2549]: W1213 09:48:59.969518 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.970252 kubelet[2549]: E1213 09:48:59.969971 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.970252 kubelet[2549]: E1213 09:48:59.970036 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.970616 kubelet[2549]: E1213 09:48:59.970386 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.970616 kubelet[2549]: W1213 09:48:59.970412 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.970616 kubelet[2549]: E1213 09:48:59.970430 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.971696 kubelet[2549]: E1213 09:48:59.971660 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.971696 kubelet[2549]: W1213 09:48:59.971679 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.971696 kubelet[2549]: E1213 09:48:59.971698 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:48:59.995938 containerd[1459]: time="2024-12-13T09:48:59.995606092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdmnj,Uid:de0c84c1-6691-42aa-8583-551a87a664aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250\"" Dec 13 09:48:59.998937 kubelet[2549]: E1213 09:48:59.998025 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:48:59.998937 kubelet[2549]: W1213 09:48:59.998054 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:48:59.998937 kubelet[2549]: E1213 09:48:59.998082 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:48:59.998937 kubelet[2549]: E1213 09:48:59.998177 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:00.000157 containerd[1459]: time="2024-12-13T09:49:00.000099603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 09:49:00.167979 containerd[1459]: time="2024-12-13T09:49:00.167673311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-776c847858-gd7xc,Uid:1a2842c0-44c1-4729-941c-c911a44ef5ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"38a1a37be90b9953935f01f423632540cb5ae4928d112ed7d5b944e9d5a2ceff\"" Dec 13 09:49:00.174728 kubelet[2549]: E1213 09:49:00.173623 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:01.637691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107511709.mount: Deactivated successfully. 
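
The repeated driver-call.go / plugins.go errors above come from kubelet probing the nodeagent~uds FlexVolume directory while its driver binary is missing, so the "init" call produces no output at all. A minimal sketch (not kubelet's driver-call.go) of why empty output maps to "unexpected end of JSON input", alongside the success payload a working driver would be expected to print:

```go
// Illustrative sketch only: empty driver output vs. the conventional
// FlexVolume "init" success payload. The real failure in the log is simply
// that /opt/libexec/.../nodeagent~uds/uds is not present on the node.
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-in for the status object kubelet expects a driver to print.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// What a working driver would print for "init":
	good := `{"status":"Success","capabilities":{"attach":false}}`
	var s driverStatus
	fmt.Println(json.Unmarshal([]byte(good), &s)) // <nil>

	// With the binary missing, kubelet reads empty output, and unmarshalling
	// fails exactly as logged:
	fmt.Println(json.Unmarshal([]byte(""), &s)) // unexpected end of JSON input
}
```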
Dec 13 09:49:01.745153 kubelet[2549]: E1213 09:49:01.745059 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8kgk" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" Dec 13 09:49:01.891910 containerd[1459]: time="2024-12-13T09:49:01.890293185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:01.891910 containerd[1459]: time="2024-12-13T09:49:01.891745268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 09:49:01.893987 containerd[1459]: time="2024-12-13T09:49:01.893890134Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:01.909853 containerd[1459]: time="2024-12-13T09:49:01.909767844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:01.911748 containerd[1459]: time="2024-12-13T09:49:01.911488099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.911320901s" Dec 13 09:49:01.911748 containerd[1459]: time="2024-12-13T09:49:01.911566072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 09:49:01.914052 containerd[1459]: time="2024-12-13T09:49:01.913522059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 09:49:01.917227 containerd[1459]: time="2024-12-13T09:49:01.916751079Z" level=info msg="CreateContainer within sandbox \"b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 09:49:01.946191 containerd[1459]: time="2024-12-13T09:49:01.945976810Z" level=info msg="CreateContainer within sandbox \"b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1\"" Dec 13 09:49:01.948341 containerd[1459]: time="2024-12-13T09:49:01.948247586Z" level=info msg="StartContainer for \"6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1\"" Dec 13 09:49:02.025354 systemd[1]: Started cri-containerd-6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1.scope - libcontainer container 6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1. 
Dec 13 09:49:02.080311 containerd[1459]: time="2024-12-13T09:49:02.080235802Z" level=info msg="StartContainer for \"6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1\" returns successfully" Dec 13 09:49:02.111698 systemd[1]: cri-containerd-6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1.scope: Deactivated successfully. Dec 13 09:49:02.158692 containerd[1459]: time="2024-12-13T09:49:02.158481443Z" level=info msg="shim disconnected" id=6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1 namespace=k8s.io Dec 13 09:49:02.158692 containerd[1459]: time="2024-12-13T09:49:02.158554332Z" level=warning msg="cleaning up after shim disconnected" id=6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1 namespace=k8s.io Dec 13 09:49:02.158692 containerd[1459]: time="2024-12-13T09:49:02.158568225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:49:02.557970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c33ac8ce526a8367c412830fc1e596814bda58e8d26c3ce5c6cb73e431915e1-rootfs.mount: Deactivated successfully. Dec 13 09:49:02.884639 kubelet[2549]: E1213 09:49:02.883679 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:03.744744 kubelet[2549]: E1213 09:49:03.744637 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8kgk" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" Dec 13 09:49:04.982311 containerd[1459]: time="2024-12-13T09:49:04.982228315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:04.983737 containerd[1459]: time="2024-12-13T09:49:04.983227770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 09:49:04.987942 containerd[1459]: time="2024-12-13T09:49:04.986587002Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:04.990606 containerd[1459]: time="2024-12-13T09:49:04.990442596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:04.992201 containerd[1459]: time="2024-12-13T09:49:04.992116478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.07853585s" Dec 13 09:49:04.992201 containerd[1459]: time="2024-12-13T09:49:04.992194611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 09:49:04.996759 containerd[1459]: time="2024-12-13T09:49:04.996045637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 09:49:05.024558 containerd[1459]: 
time="2024-12-13T09:49:05.024334071Z" level=info msg="CreateContainer within sandbox \"38a1a37be90b9953935f01f423632540cb5ae4928d112ed7d5b944e9d5a2ceff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 09:49:05.053199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881172239.mount: Deactivated successfully. Dec 13 09:49:05.066466 containerd[1459]: time="2024-12-13T09:49:05.066378978Z" level=info msg="CreateContainer within sandbox \"38a1a37be90b9953935f01f423632540cb5ae4928d112ed7d5b944e9d5a2ceff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cef121cc60baaae65859db8ed015e054f5104a8a47dd8bec81747f8ebf1dfd3a\"" Dec 13 09:49:05.068386 containerd[1459]: time="2024-12-13T09:49:05.067229705Z" level=info msg="StartContainer for \"cef121cc60baaae65859db8ed015e054f5104a8a47dd8bec81747f8ebf1dfd3a\"" Dec 13 09:49:05.156280 systemd[1]: Started cri-containerd-cef121cc60baaae65859db8ed015e054f5104a8a47dd8bec81747f8ebf1dfd3a.scope - libcontainer container cef121cc60baaae65859db8ed015e054f5104a8a47dd8bec81747f8ebf1dfd3a. Dec 13 09:49:05.264246 containerd[1459]: time="2024-12-13T09:49:05.263730253Z" level=info msg="StartContainer for \"cef121cc60baaae65859db8ed015e054f5104a8a47dd8bec81747f8ebf1dfd3a\" returns successfully" Dec 13 09:49:05.744978 kubelet[2549]: E1213 09:49:05.744293 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8kgk" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" Dec 13 09:49:05.915418 kubelet[2549]: E1213 09:49:05.915349 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:05.936607 kubelet[2549]: I1213 09:49:05.935500 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-776c847858-gd7xc" podStartSLOduration=2.122719079 podStartE2EDuration="6.935481305s" podCreationTimestamp="2024-12-13 09:48:59 +0000 UTC" firstStartedPulling="2024-12-13 09:49:00.182037853 +0000 UTC m=+21.686923709" lastFinishedPulling="2024-12-13 09:49:04.994800099 +0000 UTC m=+26.499685935" observedRunningTime="2024-12-13 09:49:05.935148942 +0000 UTC m=+27.440034818" watchObservedRunningTime="2024-12-13 09:49:05.935481305 +0000 UTC m=+27.440367165" Dec 13 09:49:06.916522 kubelet[2549]: I1213 09:49:06.916477 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:49:06.919137 kubelet[2549]: E1213 09:49:06.918552 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:07.744995 kubelet[2549]: E1213 09:49:07.744711 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8kgk" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" Dec 13 09:49:09.745272 kubelet[2549]: E1213 09:49:09.745062 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8kgk" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" Dec 13 09:49:10.842692 containerd[1459]: time="2024-12-13T09:49:10.842621929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:10.844424 containerd[1459]: time="2024-12-13T09:49:10.844361223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 09:49:10.845167 containerd[1459]: time="2024-12-13T09:49:10.845125503Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:10.848598 containerd[1459]: time="2024-12-13T09:49:10.848526001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:10.850004 containerd[1459]: time="2024-12-13T09:49:10.849925357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.853297848s" Dec 13 09:49:10.850004 containerd[1459]: time="2024-12-13T09:49:10.849991876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 09:49:10.856328 containerd[1459]: time="2024-12-13T09:49:10.856256804Z" level=info msg="CreateContainer within sandbox \"b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 09:49:10.879572 containerd[1459]: time="2024-12-13T09:49:10.879238292Z" level=info msg="CreateContainer within sandbox \"b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972\"" Dec 13 09:49:10.881033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096996895.mount: Deactivated successfully. Dec 13 09:49:10.882547 containerd[1459]: time="2024-12-13T09:49:10.881649976Z" level=info msg="StartContainer for \"4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972\"" Dec 13 09:49:10.977165 systemd[1]: Started cri-containerd-4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972.scope - libcontainer container 4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972. Dec 13 09:49:11.030365 containerd[1459]: time="2024-12-13T09:49:11.030262336Z" level=info msg="StartContainer for \"4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972\" returns successfully" Dec 13 09:49:11.663731 systemd[1]: cri-containerd-4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972.scope: Deactivated successfully. Dec 13 09:49:11.711351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972-rootfs.mount: Deactivated successfully. 
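
The "Observed pod startup duration" entry for calico-typha-776c847858-gd7xc a few lines above reports two figures that differ by exactly the image-pull window. A rough sketch of that relationship, reusing the monotonic offsets (m=+...) from the entry and assuming the SLO figure excludes pull time (the numbers bear this out); this is not kubelet source:

```go
// Rough sketch: reproduce podStartE2EDuration and podStartSLOduration from
// the monotonic offsets logged by pod_startup_latency_tracker.
package main

import "fmt"

func main() {
	created := 20.504553513 // derived: observedRunning minus the logged E2E duration
	firstStartedPulling := 21.686923709
	lastFinishedPulling := 26.499685935
	observedRunning := 27.440034818

	e2e := observedRunning - created                         // ≈ 6.935481305s (podStartE2EDuration)
	slo := e2e - (lastFinishedPulling - firstStartedPulling) // ≈ 2.122719079s (podStartSLOduration)
	fmt.Printf("E2E=%.9fs SLO=%.9fs\n", e2e, slo)
}
```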
Dec 13 09:49:11.738496 containerd[1459]: time="2024-12-13T09:49:11.738410926Z" level=info msg="shim disconnected" id=4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972 namespace=k8s.io Dec 13 09:49:11.738496 containerd[1459]: time="2024-12-13T09:49:11.738469748Z" level=warning msg="cleaning up after shim disconnected" id=4af54249f6f2db8d00a3c3bd263e938af3ebc156065ba4781fb74fd44bf1c972 namespace=k8s.io Dec 13 09:49:11.738496 containerd[1459]: time="2024-12-13T09:49:11.738509032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:49:11.740399 kubelet[2549]: I1213 09:49:11.739514 2549 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 09:49:11.759326 systemd[1]: Created slice kubepods-besteffort-pod56d0e423_edd4_4223_a5ef_7fe3393e4271.slice - libcontainer container kubepods-besteffort-pod56d0e423_edd4_4223_a5ef_7fe3393e4271.slice. Dec 13 09:49:11.770912 containerd[1459]: time="2024-12-13T09:49:11.770291080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8kgk,Uid:56d0e423-edd4-4223-a5ef-7fe3393e4271,Namespace:calico-system,Attempt:0,}" Dec 13 09:49:11.816623 kubelet[2549]: I1213 09:49:11.814138 2549 topology_manager.go:215] "Topology Admit Handler" podUID="9046cd15-11c7-4a60-ba00-3642ddd7002a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4xm55" Dec 13 09:49:11.818078 kubelet[2549]: I1213 09:49:11.817661 2549 topology_manager.go:215] "Topology Admit Handler" podUID="89c56140-f295-40c7-ae2a-952e41b9599a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-464d2" Dec 13 09:49:11.838078 kubelet[2549]: I1213 09:49:11.838033 2549 topology_manager.go:215] "Topology Admit Handler" podUID="e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5" podNamespace="calico-system" podName="calico-kube-controllers-5bf6c6b877-7fph7" Dec 13 09:49:11.838749 kubelet[2549]: I1213 09:49:11.838711 2549 topology_manager.go:215] "Topology Admit Handler" podUID="97522c65-0ab9-4890-ab5c-998cdfc7bb0c" podNamespace="calico-apiserver" podName="calico-apiserver-86c9dd4fbf-d9pn9" Dec 13 09:49:11.841970 kubelet[2549]: I1213 09:49:11.841422 2549 topology_manager.go:215] "Topology Admit Handler" podUID="c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc" podNamespace="calico-apiserver" podName="calico-apiserver-86c9dd4fbf-lcxl7" Dec 13 09:49:11.844116 systemd[1]: Created slice kubepods-burstable-pod9046cd15_11c7_4a60_ba00_3642ddd7002a.slice - libcontainer container kubepods-burstable-pod9046cd15_11c7_4a60_ba00_3642ddd7002a.slice. 
Dec 13 09:49:11.858704 kubelet[2549]: I1213 09:49:11.858659 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqnck\" (UniqueName: \"kubernetes.io/projected/9046cd15-11c7-4a60-ba00-3642ddd7002a-kube-api-access-xqnck\") pod \"coredns-7db6d8ff4d-4xm55\" (UID: \"9046cd15-11c7-4a60-ba00-3642ddd7002a\") " pod="kube-system/coredns-7db6d8ff4d-4xm55" Dec 13 09:49:11.858704 kubelet[2549]: I1213 09:49:11.858706 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9046cd15-11c7-4a60-ba00-3642ddd7002a-config-volume\") pod \"coredns-7db6d8ff4d-4xm55\" (UID: \"9046cd15-11c7-4a60-ba00-3642ddd7002a\") " pod="kube-system/coredns-7db6d8ff4d-4xm55" Dec 13 09:49:11.858916 kubelet[2549]: I1213 09:49:11.858736 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89c56140-f295-40c7-ae2a-952e41b9599a-config-volume\") pod \"coredns-7db6d8ff4d-464d2\" (UID: \"89c56140-f295-40c7-ae2a-952e41b9599a\") " pod="kube-system/coredns-7db6d8ff4d-464d2" Dec 13 09:49:11.858916 kubelet[2549]: I1213 09:49:11.858753 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gd8\" (UniqueName: \"kubernetes.io/projected/89c56140-f295-40c7-ae2a-952e41b9599a-kube-api-access-v6gd8\") pod \"coredns-7db6d8ff4d-464d2\" (UID: \"89c56140-f295-40c7-ae2a-952e41b9599a\") " pod="kube-system/coredns-7db6d8ff4d-464d2" Dec 13 09:49:11.865122 systemd[1]: Created slice kubepods-burstable-pod89c56140_f295_40c7_ae2a_952e41b9599a.slice - libcontainer container kubepods-burstable-pod89c56140_f295_40c7_ae2a_952e41b9599a.slice. Dec 13 09:49:11.893969 systemd[1]: Created slice kubepods-besteffort-pode9d7b383_6d7b_4fc2_8e54_c423fd3aaee5.slice - libcontainer container kubepods-besteffort-pode9d7b383_6d7b_4fc2_8e54_c423fd3aaee5.slice. Dec 13 09:49:11.918801 systemd[1]: Created slice kubepods-besteffort-pod97522c65_0ab9_4890_ab5c_998cdfc7bb0c.slice - libcontainer container kubepods-besteffort-pod97522c65_0ab9_4890_ab5c_998cdfc7bb0c.slice. Dec 13 09:49:11.954982 kubelet[2549]: E1213 09:49:11.954948 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:11.957209 containerd[1459]: time="2024-12-13T09:49:11.956885023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 09:49:11.958327 systemd[1]: Created slice kubepods-besteffort-podc34b1b21_b5b0_4dd4_928a_ce99fd8dbbbc.slice - libcontainer container kubepods-besteffort-podc34b1b21_b5b0_4dd4_928a_ce99fd8dbbbc.slice. 
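
The recurring dns.go "Nameserver limits exceeded" warnings reflect kubelet's cap of three nameservers per resolv.conf; entries beyond the first three are dropped, which is why the applied line ends with a repeated 67.207.67.2. A stand-in sketch of that truncation (this is not kubelet's dns.go, and the fourth host entry below is hypothetical):

```go
// Illustrative only: cap a host resolv.conf nameserver list at kubelet's
// per-pod limit of three and print the "applied" line.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // kubelet's nameserver limit

func applyNameserverLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical host resolv.conf contents with more entries than allowed.
	host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"}
	fmt.Println("the applied nameserver line is:",
		strings.Join(applyNameserverLimit(host), " "))
}
```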
Dec 13 09:49:11.959973 kubelet[2549]: I1213 09:49:11.959081 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl7rd\" (UniqueName: \"kubernetes.io/projected/97522c65-0ab9-4890-ab5c-998cdfc7bb0c-kube-api-access-hl7rd\") pod \"calico-apiserver-86c9dd4fbf-d9pn9\" (UID: \"97522c65-0ab9-4890-ab5c-998cdfc7bb0c\") " pod="calico-apiserver/calico-apiserver-86c9dd4fbf-d9pn9" Dec 13 09:49:11.959973 kubelet[2549]: I1213 09:49:11.959165 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/97522c65-0ab9-4890-ab5c-998cdfc7bb0c-calico-apiserver-certs\") pod \"calico-apiserver-86c9dd4fbf-d9pn9\" (UID: \"97522c65-0ab9-4890-ab5c-998cdfc7bb0c\") " pod="calico-apiserver/calico-apiserver-86c9dd4fbf-d9pn9" Dec 13 09:49:11.959973 kubelet[2549]: I1213 09:49:11.959276 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc-calico-apiserver-certs\") pod \"calico-apiserver-86c9dd4fbf-lcxl7\" (UID: \"c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc\") " pod="calico-apiserver/calico-apiserver-86c9dd4fbf-lcxl7" Dec 13 09:49:11.960356 kubelet[2549]: I1213 09:49:11.960282 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz7jf\" (UniqueName: \"kubernetes.io/projected/c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc-kube-api-access-mz7jf\") pod \"calico-apiserver-86c9dd4fbf-lcxl7\" (UID: \"c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc\") " pod="calico-apiserver/calico-apiserver-86c9dd4fbf-lcxl7" Dec 13 09:49:11.960401 kubelet[2549]: I1213 09:49:11.960358 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5-tigera-ca-bundle\") pod \"calico-kube-controllers-5bf6c6b877-7fph7\" (UID: \"e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5\") " pod="calico-system/calico-kube-controllers-5bf6c6b877-7fph7" Dec 13 09:49:11.960496 kubelet[2549]: I1213 09:49:11.960388 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjbg\" (UniqueName: \"kubernetes.io/projected/e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5-kube-api-access-qkjbg\") pod \"calico-kube-controllers-5bf6c6b877-7fph7\" (UID: \"e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5\") " pod="calico-system/calico-kube-controllers-5bf6c6b877-7fph7" Dec 13 09:49:12.155433 kubelet[2549]: E1213 09:49:12.155390 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:12.159312 containerd[1459]: time="2024-12-13T09:49:12.158641718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4xm55,Uid:9046cd15-11c7-4a60-ba00-3642ddd7002a,Namespace:kube-system,Attempt:0,}" Dec 13 09:49:12.186599 kubelet[2549]: E1213 09:49:12.186012 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:12.244984 containerd[1459]: time="2024-12-13T09:49:12.244008085Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-86c9dd4fbf-d9pn9,Uid:97522c65-0ab9-4890-ab5c-998cdfc7bb0c,Namespace:calico-apiserver,Attempt:0,}" Dec 13 09:49:12.265455 containerd[1459]: time="2024-12-13T09:49:12.265244271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bf6c6b877-7fph7,Uid:e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5,Namespace:calico-system,Attempt:0,}" Dec 13 09:49:12.265789 containerd[1459]: time="2024-12-13T09:49:12.265746941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-464d2,Uid:89c56140-f295-40c7-ae2a-952e41b9599a,Namespace:kube-system,Attempt:0,}" Dec 13 09:49:12.274955 containerd[1459]: time="2024-12-13T09:49:12.274887199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c9dd4fbf-lcxl7,Uid:c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc,Namespace:calico-apiserver,Attempt:0,}" Dec 13 09:49:12.320365 containerd[1459]: time="2024-12-13T09:49:12.320289712Z" level=error msg="Failed to destroy network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.330246 containerd[1459]: time="2024-12-13T09:49:12.330164290Z" level=error msg="encountered an error cleaning up failed sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.331288 containerd[1459]: time="2024-12-13T09:49:12.330981763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8kgk,Uid:56d0e423-edd4-4223-a5ef-7fe3393e4271,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.344018 kubelet[2549]: E1213 09:49:12.343924 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.345114 kubelet[2549]: E1213 09:49:12.344515 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z8kgk" Dec 13 09:49:12.345114 kubelet[2549]: E1213 09:49:12.344656 2549 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z8kgk" Dec 13 09:49:12.345114 kubelet[2549]: E1213 09:49:12.344712 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z8kgk_calico-system(56d0e423-edd4-4223-a5ef-7fe3393e4271)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z8kgk_calico-system(56d0e423-edd4-4223-a5ef-7fe3393e4271)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z8kgk" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" Dec 13 09:49:12.379623 containerd[1459]: time="2024-12-13T09:49:12.379462789Z" level=error msg="Failed to destroy network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.383906 containerd[1459]: time="2024-12-13T09:49:12.383176747Z" level=error msg="encountered an error cleaning up failed sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.385962 containerd[1459]: time="2024-12-13T09:49:12.384465850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4xm55,Uid:9046cd15-11c7-4a60-ba00-3642ddd7002a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.387446 kubelet[2549]: E1213 09:49:12.386964 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.387446 kubelet[2549]: E1213 09:49:12.387036 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4xm55" Dec 13 09:49:12.387446 kubelet[2549]: E1213 09:49:12.387077 2549 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4xm55" Dec 13 09:49:12.387671 kubelet[2549]: E1213 09:49:12.387136 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4xm55_kube-system(9046cd15-11c7-4a60-ba00-3642ddd7002a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4xm55_kube-system(9046cd15-11c7-4a60-ba00-3642ddd7002a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4xm55" podUID="9046cd15-11c7-4a60-ba00-3642ddd7002a" Dec 13 09:49:12.557265 containerd[1459]: time="2024-12-13T09:49:12.557197992Z" level=error msg="Failed to destroy network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.559323 containerd[1459]: time="2024-12-13T09:49:12.558739737Z" level=error msg="encountered an error cleaning up failed sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.559993 containerd[1459]: time="2024-12-13T09:49:12.559670539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c9dd4fbf-d9pn9,Uid:97522c65-0ab9-4890-ab5c-998cdfc7bb0c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.561145 kubelet[2549]: E1213 09:49:12.560312 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.561145 kubelet[2549]: E1213 09:49:12.560388 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-d9pn9" Dec 13 09:49:12.561145 kubelet[2549]: E1213 09:49:12.560421 2549 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-d9pn9" Dec 13 09:49:12.561382 kubelet[2549]: E1213 09:49:12.560481 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86c9dd4fbf-d9pn9_calico-apiserver(97522c65-0ab9-4890-ab5c-998cdfc7bb0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86c9dd4fbf-d9pn9_calico-apiserver(97522c65-0ab9-4890-ab5c-998cdfc7bb0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-d9pn9" podUID="97522c65-0ab9-4890-ab5c-998cdfc7bb0c" Dec 13 09:49:12.581444 containerd[1459]: time="2024-12-13T09:49:12.578162101Z" level=error msg="Failed to destroy network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.581444 containerd[1459]: time="2024-12-13T09:49:12.578559273Z" level=error msg="encountered an error cleaning up failed sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.581444 containerd[1459]: time="2024-12-13T09:49:12.578615287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bf6c6b877-7fph7,Uid:e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.581803 kubelet[2549]: E1213 09:49:12.578986 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.581803 kubelet[2549]: E1213 09:49:12.579089 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bf6c6b877-7fph7" Dec 13 09:49:12.581803 kubelet[2549]: E1213 
09:49:12.579119 2549 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bf6c6b877-7fph7" Dec 13 09:49:12.582047 kubelet[2549]: E1213 09:49:12.579184 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bf6c6b877-7fph7_calico-system(e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bf6c6b877-7fph7_calico-system(e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bf6c6b877-7fph7" podUID="e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5" Dec 13 09:49:12.604802 containerd[1459]: time="2024-12-13T09:49:12.604363616Z" level=error msg="Failed to destroy network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.605716 containerd[1459]: time="2024-12-13T09:49:12.605382772Z" level=error msg="encountered an error cleaning up failed sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.606147 containerd[1459]: time="2024-12-13T09:49:12.605678574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c9dd4fbf-lcxl7,Uid:c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.607896 kubelet[2549]: E1213 09:49:12.606815 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.607896 kubelet[2549]: E1213 09:49:12.606937 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-lcxl7" Dec 13 09:49:12.607896 kubelet[2549]: E1213 09:49:12.606970 2549 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-lcxl7" Dec 13 09:49:12.608147 kubelet[2549]: E1213 09:49:12.607054 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86c9dd4fbf-lcxl7_calico-apiserver(c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86c9dd4fbf-lcxl7_calico-apiserver(c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-lcxl7" podUID="c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc" Dec 13 09:49:12.618256 containerd[1459]: time="2024-12-13T09:49:12.618175626Z" level=error msg="Failed to destroy network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.618760 containerd[1459]: time="2024-12-13T09:49:12.618700565Z" level=error msg="encountered an error cleaning up failed sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.619617 containerd[1459]: time="2024-12-13T09:49:12.618801661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-464d2,Uid:89c56140-f295-40c7-ae2a-952e41b9599a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.619715 kubelet[2549]: E1213 09:49:12.619104 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:12.619715 kubelet[2549]: E1213 09:49:12.619182 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-464d2" Dec 13 09:49:12.619715 kubelet[2549]: E1213 09:49:12.619215 2549 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-464d2" Dec 13 09:49:12.619939 kubelet[2549]: E1213 09:49:12.619270 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-464d2_kube-system(89c56140-f295-40c7-ae2a-952e41b9599a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-464d2_kube-system(89c56140-f295-40c7-ae2a-952e41b9599a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-464d2" podUID="89c56140-f295-40c7-ae2a-952e41b9599a" Dec 13 09:49:12.881052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032-shm.mount: Deactivated successfully. Dec 13 09:49:12.959144 kubelet[2549]: I1213 09:49:12.959096 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:12.966017 kubelet[2549]: I1213 09:49:12.964628 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:12.968549 containerd[1459]: time="2024-12-13T09:49:12.967642970Z" level=info msg="StopPodSandbox for \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\"" Dec 13 09:49:12.970252 containerd[1459]: time="2024-12-13T09:49:12.970162087Z" level=info msg="Ensure that sandbox 5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032 in task-service has been cleanup successfully" Dec 13 09:49:12.974011 containerd[1459]: time="2024-12-13T09:49:12.973539929Z" level=info msg="StopPodSandbox for \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\"" Dec 13 09:49:12.976529 containerd[1459]: time="2024-12-13T09:49:12.974601661Z" level=info msg="Ensure that sandbox af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f in task-service has been cleanup successfully" Dec 13 09:49:12.979737 kubelet[2549]: I1213 09:49:12.979327 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:12.983781 containerd[1459]: time="2024-12-13T09:49:12.983628633Z" level=info msg="StopPodSandbox for \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\"" Dec 13 09:49:12.985161 containerd[1459]: time="2024-12-13T09:49:12.985088658Z" level=info msg="Ensure that sandbox 
a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60 in task-service has been cleanup successfully" Dec 13 09:49:12.986789 kubelet[2549]: I1213 09:49:12.986736 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:12.991460 containerd[1459]: time="2024-12-13T09:49:12.991137832Z" level=info msg="StopPodSandbox for \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\"" Dec 13 09:49:12.996141 containerd[1459]: time="2024-12-13T09:49:12.996074463Z" level=info msg="Ensure that sandbox 5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135 in task-service has been cleanup successfully" Dec 13 09:49:13.009840 kubelet[2549]: I1213 09:49:13.009108 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:13.011841 containerd[1459]: time="2024-12-13T09:49:13.011558319Z" level=info msg="StopPodSandbox for \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\"" Dec 13 09:49:13.013474 containerd[1459]: time="2024-12-13T09:49:13.013376456Z" level=info msg="Ensure that sandbox c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df in task-service has been cleanup successfully" Dec 13 09:49:13.019518 kubelet[2549]: I1213 09:49:13.019486 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:13.024710 containerd[1459]: time="2024-12-13T09:49:13.024135511Z" level=info msg="StopPodSandbox for \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\"" Dec 13 09:49:13.024710 containerd[1459]: time="2024-12-13T09:49:13.024385825Z" level=info msg="Ensure that sandbox 057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766 in task-service has been cleanup successfully" Dec 13 09:49:13.093620 containerd[1459]: time="2024-12-13T09:49:13.093400224Z" level=error msg="StopPodSandbox for \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\" failed" error="failed to destroy network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:13.094667 kubelet[2549]: E1213 09:49:13.094618 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:13.095333 kubelet[2549]: E1213 09:49:13.094990 2549 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032"} Dec 13 09:49:13.095333 kubelet[2549]: E1213 09:49:13.095171 2549 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56d0e423-edd4-4223-a5ef-7fe3393e4271\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:49:13.095333 kubelet[2549]: E1213 09:49:13.095196 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56d0e423-edd4-4223-a5ef-7fe3393e4271\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z8kgk" podUID="56d0e423-edd4-4223-a5ef-7fe3393e4271" Dec 13 09:49:13.110197 containerd[1459]: time="2024-12-13T09:49:13.110044876Z" level=error msg="StopPodSandbox for \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\" failed" error="failed to destroy network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:13.110927 kubelet[2549]: E1213 09:49:13.110520 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:13.110927 kubelet[2549]: E1213 09:49:13.110572 2549 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135"} Dec 13 09:49:13.110927 kubelet[2549]: E1213 09:49:13.110606 2549 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97522c65-0ab9-4890-ab5c-998cdfc7bb0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:49:13.110927 kubelet[2549]: E1213 09:49:13.110636 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97522c65-0ab9-4890-ab5c-998cdfc7bb0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-d9pn9" podUID="97522c65-0ab9-4890-ab5c-998cdfc7bb0c" Dec 13 09:49:13.148225 containerd[1459]: time="2024-12-13T09:49:13.147897940Z" level=error msg="StopPodSandbox for 
\"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\" failed" error="failed to destroy network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:13.151169 kubelet[2549]: E1213 09:49:13.148755 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:13.151169 kubelet[2549]: E1213 09:49:13.148827 2549 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f"} Dec 13 09:49:13.151169 kubelet[2549]: E1213 09:49:13.151027 2549 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89c56140-f295-40c7-ae2a-952e41b9599a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:49:13.151169 kubelet[2549]: E1213 09:49:13.151093 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89c56140-f295-40c7-ae2a-952e41b9599a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-464d2" podUID="89c56140-f295-40c7-ae2a-952e41b9599a" Dec 13 09:49:13.165640 containerd[1459]: time="2024-12-13T09:49:13.165204255Z" level=error msg="StopPodSandbox for \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\" failed" error="failed to destroy network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:13.166564 kubelet[2549]: E1213 09:49:13.166515 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:13.166972 kubelet[2549]: E1213 09:49:13.166775 2549 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df"} Dec 13 09:49:13.166972 kubelet[2549]: E1213 09:49:13.166834 2549 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9046cd15-11c7-4a60-ba00-3642ddd7002a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:49:13.166972 kubelet[2549]: E1213 09:49:13.166927 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9046cd15-11c7-4a60-ba00-3642ddd7002a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4xm55" podUID="9046cd15-11c7-4a60-ba00-3642ddd7002a" Dec 13 09:49:13.174714 containerd[1459]: time="2024-12-13T09:49:13.174631875Z" level=error msg="StopPodSandbox for \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\" failed" error="failed to destroy network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:13.175932 kubelet[2549]: E1213 09:49:13.175287 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:13.175932 kubelet[2549]: E1213 09:49:13.175346 2549 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60"} Dec 13 09:49:13.175932 kubelet[2549]: E1213 09:49:13.175380 2549 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:49:13.175932 kubelet[2549]: E1213 09:49:13.175404 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bf6c6b877-7fph7" podUID="e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5" Dec 13 09:49:13.184515 containerd[1459]: time="2024-12-13T09:49:13.184460766Z" level=error msg="StopPodSandbox for \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\" failed" error="failed to destroy network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:49:13.185242 kubelet[2549]: E1213 09:49:13.185193 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:13.185465 kubelet[2549]: E1213 09:49:13.185434 2549 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766"} Dec 13 09:49:13.185560 kubelet[2549]: E1213 09:49:13.185546 2549 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:49:13.185752 kubelet[2549]: E1213 09:49:13.185692 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-lcxl7" podUID="c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc" Dec 13 09:49:14.651156 kubelet[2549]: I1213 09:49:14.651091 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:49:14.653952 kubelet[2549]: E1213 09:49:14.652284 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:15.027792 kubelet[2549]: E1213 09:49:15.027662 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:20.603584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461425075.mount: Deactivated successfully. 
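Every failure above has the same root cause: the Calico CNI plugin cannot find /var/lib/calico/nodename, the file the calico/node container writes into a hostPath mount of /var/lib/calico/ once it has started, so each sandbox add and delete is rejected until that node agent is up. A minimal sketch of the preflight check those messages imply, written here with a hypothetical helper rather than Calico's actual code, could look like this in Go:

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path the calico/node container is expected to
// populate via a hostPath mount of /var/lib/calico/.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename is a hypothetical helper mirroring the check implied by the
// log messages: if the file is missing, fail the CNI ADD/DEL with a hint
// that calico/node is not running yet.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			return "", fmt.Errorf("stat %s: no such file or directory: "+
				"check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

As the later entries show, the condition clears on its own: once the ghcr.io/flatcar/calico/node image finishes pulling and the calico-node container starts, the same sandboxes are torn down and recreated successfully on the next sync.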
Dec 13 09:49:20.778109 containerd[1459]: time="2024-12-13T09:49:20.730715325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 09:49:20.784498 containerd[1459]: time="2024-12-13T09:49:20.776075844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.808497308s" Dec 13 09:49:20.784498 containerd[1459]: time="2024-12-13T09:49:20.784060283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 09:49:20.821537 containerd[1459]: time="2024-12-13T09:49:20.820674443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:20.857336 containerd[1459]: time="2024-12-13T09:49:20.857093087Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:20.860284 containerd[1459]: time="2024-12-13T09:49:20.859572646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:20.863840 containerd[1459]: time="2024-12-13T09:49:20.863777493Z" level=info msg="CreateContainer within sandbox \"b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 09:49:20.908074 containerd[1459]: time="2024-12-13T09:49:20.908011321Z" level=info msg="CreateContainer within sandbox \"b714c5aac7e0b03b0f6471b3197353931366ef973c3e2b9c754c2cf002ba6250\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3c2ac133f4b95de4f6f3a60af59d0d5cc5d13882360ddcda12f2895ab51349d6\"" Dec 13 09:49:20.910386 containerd[1459]: time="2024-12-13T09:49:20.910264839Z" level=info msg="StartContainer for \"3c2ac133f4b95de4f6f3a60af59d0d5cc5d13882360ddcda12f2895ab51349d6\"" Dec 13 09:49:21.052144 systemd[1]: Started cri-containerd-3c2ac133f4b95de4f6f3a60af59d0d5cc5d13882360ddcda12f2895ab51349d6.scope - libcontainer container 3c2ac133f4b95de4f6f3a60af59d0d5cc5d13882360ddcda12f2895ab51349d6. Dec 13 09:49:21.133108 containerd[1459]: time="2024-12-13T09:49:21.132211222Z" level=info msg="StartContainer for \"3c2ac133f4b95de4f6f3a60af59d0d5cc5d13882360ddcda12f2895ab51349d6\" returns successfully" Dec 13 09:49:21.274036 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 09:49:21.274226 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 09:49:22.070315 kubelet[2549]: E1213 09:49:22.070265 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:23.081948 kubelet[2549]: E1213 09:49:23.079432 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:23.141552 systemd[1]: run-containerd-runc-k8s.io-3c2ac133f4b95de4f6f3a60af59d0d5cc5d13882360ddcda12f2895ab51349d6-runc.bi8LJw.mount: Deactivated successfully. Dec 13 09:49:23.376939 kernel: bpftool[3821]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 09:49:23.750878 containerd[1459]: time="2024-12-13T09:49:23.746897697Z" level=info msg="StopPodSandbox for \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\"" Dec 13 09:49:23.750878 containerd[1459]: time="2024-12-13T09:49:23.748083361Z" level=info msg="StopPodSandbox for \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\"" Dec 13 09:49:23.764072 systemd-networkd[1359]: vxlan.calico: Link UP Dec 13 09:49:23.764079 systemd-networkd[1359]: vxlan.calico: Gained carrier Dec 13 09:49:24.006560 kubelet[2549]: I1213 09:49:23.992913 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hdmnj" podStartSLOduration=4.186331615 podStartE2EDuration="24.985026367s" podCreationTimestamp="2024-12-13 09:48:59 +0000 UTC" firstStartedPulling="2024-12-13 09:48:59.999534494 +0000 UTC m=+21.504420340" lastFinishedPulling="2024-12-13 09:49:20.798229249 +0000 UTC m=+42.303115092" observedRunningTime="2024-12-13 09:49:22.136630239 +0000 UTC m=+43.641516099" watchObservedRunningTime="2024-12-13 09:49:23.985026367 +0000 UTC m=+45.489912219" Dec 13 09:49:24.094945 kubelet[2549]: E1213 09:49:24.094723 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:24.173208 systemd[1]: run-containerd-runc-k8s.io-3c2ac133f4b95de4f6f3a60af59d0d5cc5d13882360ddcda12f2895ab51349d6-runc.QHEiNE.mount: Deactivated successfully. Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:23.993 [INFO][3876] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:23.993 [INFO][3876] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" iface="eth0" netns="/var/run/netns/cni-b89c8abe-8e00-d1fc-60e1-3202b5fbdc1b" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:23.993 [INFO][3876] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" iface="eth0" netns="/var/run/netns/cni-b89c8abe-8e00-d1fc-60e1-3202b5fbdc1b" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:23.994 [INFO][3876] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" iface="eth0" netns="/var/run/netns/cni-b89c8abe-8e00-d1fc-60e1-3202b5fbdc1b" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:23.994 [INFO][3876] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:23.994 [INFO][3876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:24.207 [INFO][3899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:24.212 [INFO][3899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:24.213 [INFO][3899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:24.231 [WARNING][3899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:24.232 [INFO][3899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:24.238 [INFO][3899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:24.246558 containerd[1459]: 2024-12-13 09:49:24.241 [INFO][3876] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:24.254828 systemd[1]: run-netns-cni\x2db89c8abe\x2d8e00\x2dd1fc\x2d60e1\x2d3202b5fbdc1b.mount: Deactivated successfully. 
Dec 13 09:49:24.263622 containerd[1459]: time="2024-12-13T09:49:24.263051310Z" level=info msg="TearDown network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\" successfully" Dec 13 09:49:24.263622 containerd[1459]: time="2024-12-13T09:49:24.263106561Z" level=info msg="StopPodSandbox for \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\" returns successfully" Dec 13 09:49:24.270609 containerd[1459]: time="2024-12-13T09:49:24.270076895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bf6c6b877-7fph7,Uid:e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5,Namespace:calico-system,Attempt:1,}" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:23.983 [INFO][3877] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:23.986 [INFO][3877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" iface="eth0" netns="/var/run/netns/cni-782ee611-465c-fcc2-3a25-2964c573c2ed" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:23.986 [INFO][3877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" iface="eth0" netns="/var/run/netns/cni-782ee611-465c-fcc2-3a25-2964c573c2ed" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:23.988 [INFO][3877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" iface="eth0" netns="/var/run/netns/cni-782ee611-465c-fcc2-3a25-2964c573c2ed" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:23.989 [INFO][3877] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:23.989 [INFO][3877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:24.208 [INFO][3898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:24.213 [INFO][3898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:24.239 [INFO][3898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:24.256 [WARNING][3898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:24.256 [INFO][3898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:24.262 [INFO][3898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:24.280046 containerd[1459]: 2024-12-13 09:49:24.273 [INFO][3877] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:24.281999 containerd[1459]: time="2024-12-13T09:49:24.281729346Z" level=info msg="TearDown network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\" successfully" Dec 13 09:49:24.281999 containerd[1459]: time="2024-12-13T09:49:24.281777230Z" level=info msg="StopPodSandbox for \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\" returns successfully" Dec 13 09:49:24.285213 containerd[1459]: time="2024-12-13T09:49:24.284552946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c9dd4fbf-d9pn9,Uid:97522c65-0ab9-4890-ab5c-998cdfc7bb0c,Namespace:calico-apiserver,Attempt:1,}" Dec 13 09:49:24.293954 systemd[1]: run-netns-cni\x2d782ee611\x2d465c\x2dfcc2\x2d3a25\x2d2964c573c2ed.mount: Deactivated successfully. 
Dec 13 09:49:24.694764 systemd-networkd[1359]: cali51203558a73: Link UP Dec 13 09:49:24.697827 systemd-networkd[1359]: cali51203558a73: Gained carrier Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.469 [INFO][3941] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0 calico-kube-controllers-5bf6c6b877- calico-system e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5 832 0 2024-12-13 09:48:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bf6c6b877 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-d-c5ae8496ec calico-kube-controllers-5bf6c6b877-7fph7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali51203558a73 [] []}} ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Namespace="calico-system" Pod="calico-kube-controllers-5bf6c6b877-7fph7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.469 [INFO][3941] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Namespace="calico-system" Pod="calico-kube-controllers-5bf6c6b877-7fph7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.594 [INFO][3977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" HandleID="k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.623 [INFO][3977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" HandleID="k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-d-c5ae8496ec", "pod":"calico-kube-controllers-5bf6c6b877-7fph7", "timestamp":"2024-12-13 09:49:24.594232845 +0000 UTC"}, Hostname:"ci-4081.2.1-d-c5ae8496ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.623 [INFO][3977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.624 [INFO][3977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.624 [INFO][3977] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-d-c5ae8496ec' Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.627 [INFO][3977] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.641 [INFO][3977] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.649 [INFO][3977] ipam/ipam.go 489: Trying affinity for 192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.652 [INFO][3977] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.656 [INFO][3977] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.656 [INFO][3977] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.659 [INFO][3977] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.670 [INFO][3977] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.680 [INFO][3977] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.1/26] block=192.168.0.0/26 handle="k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.681 [INFO][3977] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.1/26] handle="k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.681 [INFO][3977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
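The assignment trace shows the shape of Calico's block-affinity IPAM: the host ci-4081.2.1-d-c5ae8496ec already holds an affinity for the block 192.168.0.0/26, so the plugin loads that block, takes the first free address (192.168.0.1), and writes the block back to claim it. A simplified sketch of picking the next free address from an affine block, using assumed types rather than Calico's own, is:

package main

import (
	"fmt"
	"net/netip"
)

// block models a /26 IPAM block affine to one host; allocated records which
// addresses within the block are already in use.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]bool
}

// assign returns the first unallocated address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step in the log. The network
// address itself is skipped for simplicity; the real allocator also handles
// reservations and multiple blocks per host.
func (b *block) assign() (netip.Addr, bool) {
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if !b.allocated[a] {
			b.allocated[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.0.0/26"),
		allocated: map[netip.Addr]bool{},
	}
	first, _ := b.assign()
	second, _ := b.assign()
	fmt.Println(first, second) // 192.168.0.1 192.168.0.2, matching the log
}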
Dec 13 09:49:24.735708 containerd[1459]: 2024-12-13 09:49:24.682 [INFO][3977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.1/26] IPv6=[] ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" HandleID="k8s-pod-network.f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.737044 containerd[1459]: 2024-12-13 09:49:24.687 [INFO][3941] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Namespace="calico-system" Pod="calico-kube-controllers-5bf6c6b877-7fph7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0", GenerateName:"calico-kube-controllers-5bf6c6b877-", Namespace:"calico-system", SelfLink:"", UID:"e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bf6c6b877", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"", Pod:"calico-kube-controllers-5bf6c6b877-7fph7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51203558a73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:24.737044 containerd[1459]: 2024-12-13 09:49:24.688 [INFO][3941] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.1/32] ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Namespace="calico-system" Pod="calico-kube-controllers-5bf6c6b877-7fph7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.737044 containerd[1459]: 2024-12-13 09:49:24.688 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51203558a73 ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Namespace="calico-system" Pod="calico-kube-controllers-5bf6c6b877-7fph7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.737044 containerd[1459]: 2024-12-13 09:49:24.701 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Namespace="calico-system" Pod="calico-kube-controllers-5bf6c6b877-7fph7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.737044 
containerd[1459]: 2024-12-13 09:49:24.702 [INFO][3941] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Namespace="calico-system" Pod="calico-kube-controllers-5bf6c6b877-7fph7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0", GenerateName:"calico-kube-controllers-5bf6c6b877-", Namespace:"calico-system", SelfLink:"", UID:"e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bf6c6b877", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da", Pod:"calico-kube-controllers-5bf6c6b877-7fph7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51203558a73", MAC:"1e:6e:d7:f7:a5:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:24.737044 containerd[1459]: 2024-12-13 09:49:24.729 [INFO][3941] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da" Namespace="calico-system" Pod="calico-kube-controllers-5bf6c6b877-7fph7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:24.750641 containerd[1459]: time="2024-12-13T09:49:24.746389222Z" level=info msg="StopPodSandbox for \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\"" Dec 13 09:49:24.833804 systemd-networkd[1359]: cali6588cede904: Link UP Dec 13 09:49:24.840213 systemd-networkd[1359]: cali6588cede904: Gained carrier Dec 13 09:49:24.863845 containerd[1459]: time="2024-12-13T09:49:24.862913394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:49:24.864414 containerd[1459]: time="2024-12-13T09:49:24.863573490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:49:24.864414 containerd[1459]: time="2024-12-13T09:49:24.863653159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:24.864414 containerd[1459]: time="2024-12-13T09:49:24.863907640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.499 [INFO][3943] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0 calico-apiserver-86c9dd4fbf- calico-apiserver 97522c65-0ab9-4890-ab5c-998cdfc7bb0c 831 0 2024-12-13 09:48:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86c9dd4fbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-d-c5ae8496ec calico-apiserver-86c9dd4fbf-d9pn9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6588cede904 [] []}} ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-d9pn9" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.499 [INFO][3943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-d9pn9" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.610 [INFO][3983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" HandleID="k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.630 [INFO][3983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" HandleID="k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-d-c5ae8496ec", "pod":"calico-apiserver-86c9dd4fbf-d9pn9", "timestamp":"2024-12-13 09:49:24.610649964 +0000 UTC"}, Hostname:"ci-4081.2.1-d-c5ae8496ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.631 [INFO][3983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.681 [INFO][3983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.681 [INFO][3983] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-d-c5ae8496ec' Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.687 [INFO][3983] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.703 [INFO][3983] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.722 [INFO][3983] ipam/ipam.go 489: Trying affinity for 192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.729 [INFO][3983] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.741 [INFO][3983] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.742 [INFO][3983] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.751 [INFO][3983] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.768 [INFO][3983] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.788 [INFO][3983] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.2/26] block=192.168.0.0/26 handle="k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.789 [INFO][3983] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.2/26] handle="k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.789 [INFO][3983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
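The timestamps also show the two concurrent CNI ADDs (for calico-kube-controllers and calico-apiserver) serializing on the host-wide IPAM lock: the second request reports it is about to acquire the lock at 09:49:24.631 but only acquires it at 09:49:24.681, immediately after the first request releases it, which is why the pods end up with 192.168.0.1 and 192.168.0.2 instead of racing for the same address. A toy illustration of that serialization (a plain mutex here, not Calico's datastore-backed lock):

package main

import (
	"fmt"
	"sync"
)

// allocator hands out sequential addresses under a host-wide lock, which is
// enough to keep concurrent CNI ADDs from claiming the same address.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) assign() string {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	a.next++
	return fmt.Sprintf("192.168.0.%d/26", a.next)
}

func main() {
	a := &allocator{}
	var wg sync.WaitGroup
	results := make([]string, 2)
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = a.assign()
		}(i)
	}
	wg.Wait()
	fmt.Println(results) // two distinct addresses, e.g. [192.168.0.1/26 192.168.0.2/26]
}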
Dec 13 09:49:24.897409 containerd[1459]: 2024-12-13 09:49:24.789 [INFO][3983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.2/26] IPv6=[] ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" HandleID="k8s-pod-network.89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.900682 containerd[1459]: 2024-12-13 09:49:24.811 [INFO][3943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-d9pn9" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0", GenerateName:"calico-apiserver-86c9dd4fbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"97522c65-0ab9-4890-ab5c-998cdfc7bb0c", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c9dd4fbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"", Pod:"calico-apiserver-86c9dd4fbf-d9pn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6588cede904", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:24.900682 containerd[1459]: 2024-12-13 09:49:24.815 [INFO][3943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.2/32] ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-d9pn9" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.900682 containerd[1459]: 2024-12-13 09:49:24.815 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6588cede904 ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-d9pn9" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.900682 containerd[1459]: 2024-12-13 09:49:24.845 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-d9pn9" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.900682 containerd[1459]: 2024-12-13 09:49:24.857 [INFO][3943] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-d9pn9" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0", GenerateName:"calico-apiserver-86c9dd4fbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"97522c65-0ab9-4890-ab5c-998cdfc7bb0c", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c9dd4fbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf", Pod:"calico-apiserver-86c9dd4fbf-d9pn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6588cede904", MAC:"36:d2:94:7b:85:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:24.900682 containerd[1459]: 2024-12-13 09:49:24.885 [INFO][3943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-d9pn9" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:24.943411 systemd[1]: Started cri-containerd-f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da.scope - libcontainer container f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da. Dec 13 09:49:25.007997 systemd-networkd[1359]: vxlan.calico: Gained IPv6LL Dec 13 09:49:25.032759 containerd[1459]: time="2024-12-13T09:49:25.032449521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:49:25.036933 containerd[1459]: time="2024-12-13T09:49:25.032571075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:49:25.036933 containerd[1459]: time="2024-12-13T09:49:25.034240416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:25.036933 containerd[1459]: time="2024-12-13T09:49:25.034429858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:25.101577 systemd[1]: Started cri-containerd-89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf.scope - libcontainer container 89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf. Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:24.955 [INFO][4033] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:24.960 [INFO][4033] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" iface="eth0" netns="/var/run/netns/cni-53d3422c-f571-a738-c638-814770a9ccee" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:24.961 [INFO][4033] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" iface="eth0" netns="/var/run/netns/cni-53d3422c-f571-a738-c638-814770a9ccee" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:24.962 [INFO][4033] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" iface="eth0" netns="/var/run/netns/cni-53d3422c-f571-a738-c638-814770a9ccee" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:24.962 [INFO][4033] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:24.962 [INFO][4033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:25.061 [INFO][4081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:25.062 [INFO][4081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:25.063 [INFO][4081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:25.077 [WARNING][4081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:25.077 [INFO][4081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:25.082 [INFO][4081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:25.110157 containerd[1459]: 2024-12-13 09:49:25.101 [INFO][4033] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:25.112535 containerd[1459]: time="2024-12-13T09:49:25.112453235Z" level=info msg="TearDown network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\" successfully" Dec 13 09:49:25.112535 containerd[1459]: time="2024-12-13T09:49:25.112498084Z" level=info msg="StopPodSandbox for \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\" returns successfully" Dec 13 09:49:25.114003 kubelet[2549]: E1213 09:49:25.113672 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:25.119288 containerd[1459]: time="2024-12-13T09:49:25.118146049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4xm55,Uid:9046cd15-11c7-4a60-ba00-3642ddd7002a,Namespace:kube-system,Attempt:1,}" Dec 13 09:49:25.176039 systemd[1]: run-netns-cni\x2d53d3422c\x2df571\x2da738\x2dc638\x2d814770a9ccee.mount: Deactivated successfully. Dec 13 09:49:25.187122 containerd[1459]: time="2024-12-13T09:49:25.186798974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bf6c6b877-7fph7,Uid:e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5,Namespace:calico-system,Attempt:1,} returns sandbox id \"f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da\"" Dec 13 09:49:25.214237 containerd[1459]: time="2024-12-13T09:49:25.213158453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 09:49:25.290001 containerd[1459]: time="2024-12-13T09:49:25.287817997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c9dd4fbf-d9pn9,Uid:97522c65-0ab9-4890-ab5c-998cdfc7bb0c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf\"" Dec 13 09:49:25.433242 systemd-networkd[1359]: cali5e4c2066ecf: Link UP Dec 13 09:49:25.434602 systemd-networkd[1359]: cali5e4c2066ecf: Gained carrier Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.271 [INFO][4130] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0 coredns-7db6d8ff4d- kube-system 9046cd15-11c7-4a60-ba00-3642ddd7002a 843 0 2024-12-13 09:48:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-d-c5ae8496ec coredns-7db6d8ff4d-4xm55 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5e4c2066ecf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4xm55" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.271 [INFO][4130] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4xm55" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.325 [INFO][4147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" HandleID="k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.342 [INFO][4147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" HandleID="k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000313940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-d-c5ae8496ec", "pod":"coredns-7db6d8ff4d-4xm55", "timestamp":"2024-12-13 09:49:25.32584366 +0000 UTC"}, Hostname:"ci-4081.2.1-d-c5ae8496ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.342 [INFO][4147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.342 [INFO][4147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.342 [INFO][4147] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-d-c5ae8496ec' Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.346 [INFO][4147] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.355 [INFO][4147] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.365 [INFO][4147] ipam/ipam.go 489: Trying affinity for 192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.371 [INFO][4147] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.377 [INFO][4147] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.377 [INFO][4147] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.381 [INFO][4147] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713 Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.409 [INFO][4147] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.422 [INFO][4147] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.3/26] block=192.168.0.0/26 handle="k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 
09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.422 [INFO][4147] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.3/26] handle="k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.422 [INFO][4147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:25.463343 containerd[1459]: 2024-12-13 09:49:25.422 [INFO][4147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.3/26] IPv6=[] ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" HandleID="k8s-pod-network.5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.465229 containerd[1459]: 2024-12-13 09:49:25.426 [INFO][4130] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4xm55" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9046cd15-11c7-4a60-ba00-3642ddd7002a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"", Pod:"coredns-7db6d8ff4d-4xm55", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4c2066ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:25.465229 containerd[1459]: 2024-12-13 09:49:25.426 [INFO][4130] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.3/32] ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4xm55" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.465229 containerd[1459]: 2024-12-13 09:49:25.426 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e4c2066ecf ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-4xm55" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.465229 containerd[1459]: 2024-12-13 09:49:25.435 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4xm55" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.465229 containerd[1459]: 2024-12-13 09:49:25.435 [INFO][4130] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4xm55" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9046cd15-11c7-4a60-ba00-3642ddd7002a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713", Pod:"coredns-7db6d8ff4d-4xm55", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4c2066ecf", MAC:"4a:c7:99:e8:29:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:25.465229 containerd[1459]: 2024-12-13 09:49:25.460 [INFO][4130] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4xm55" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:25.511396 containerd[1459]: time="2024-12-13T09:49:25.510954807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:49:25.511396 containerd[1459]: time="2024-12-13T09:49:25.511035906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:49:25.511396 containerd[1459]: time="2024-12-13T09:49:25.511061601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:25.511396 containerd[1459]: time="2024-12-13T09:49:25.511239324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:25.562227 systemd[1]: Started cri-containerd-5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713.scope - libcontainer container 5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713. Dec 13 09:49:25.637698 containerd[1459]: time="2024-12-13T09:49:25.637511814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4xm55,Uid:9046cd15-11c7-4a60-ba00-3642ddd7002a,Namespace:kube-system,Attempt:1,} returns sandbox id \"5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713\"" Dec 13 09:49:25.639242 kubelet[2549]: E1213 09:49:25.639185 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:25.646329 containerd[1459]: time="2024-12-13T09:49:25.646263756Z" level=info msg="CreateContainer within sandbox \"5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:49:25.684496 containerd[1459]: time="2024-12-13T09:49:25.684291470Z" level=info msg="CreateContainer within sandbox \"5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c482241ef57cd86ce30915f75b481f4f792bc8ef9f49b9ec501f499066731398\"" Dec 13 09:49:25.688898 containerd[1459]: time="2024-12-13T09:49:25.688682336Z" level=info msg="StartContainer for \"c482241ef57cd86ce30915f75b481f4f792bc8ef9f49b9ec501f499066731398\"" Dec 13 09:49:25.741282 systemd[1]: Started cri-containerd-c482241ef57cd86ce30915f75b481f4f792bc8ef9f49b9ec501f499066731398.scope - libcontainer container c482241ef57cd86ce30915f75b481f4f792bc8ef9f49b9ec501f499066731398. Dec 13 09:49:25.794100 containerd[1459]: time="2024-12-13T09:49:25.793644387Z" level=info msg="StartContainer for \"c482241ef57cd86ce30915f75b481f4f792bc8ef9f49b9ec501f499066731398\" returns successfully" Dec 13 09:49:25.968350 systemd-networkd[1359]: cali51203558a73: Gained IPv6LL Dec 13 09:49:26.114584 kubelet[2549]: E1213 09:49:26.113498 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:26.154489 kubelet[2549]: I1213 09:49:26.154263 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4xm55" podStartSLOduration=35.154236664 podStartE2EDuration="35.154236664s" podCreationTimestamp="2024-12-13 09:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:49:26.152993837 +0000 UTC m=+47.657879714" watchObservedRunningTime="2024-12-13 09:49:26.154236664 +0000 UTC m=+47.659122520" Dec 13 09:49:26.167498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068341592.mount: Deactivated successfully. 
Dec 13 09:49:26.479146 systemd-networkd[1359]: cali6588cede904: Gained IPv6LL Dec 13 09:49:26.672015 systemd-networkd[1359]: cali5e4c2066ecf: Gained IPv6LL Dec 13 09:49:26.747819 containerd[1459]: time="2024-12-13T09:49:26.747625405Z" level=info msg="StopPodSandbox for \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\"" Dec 13 09:49:26.749294 containerd[1459]: time="2024-12-13T09:49:26.748313015Z" level=info msg="StopPodSandbox for \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\"" Dec 13 09:49:26.752385 containerd[1459]: time="2024-12-13T09:49:26.751250156Z" level=info msg="StopPodSandbox for \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\"" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:26.948 [INFO][4297] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:26.948 [INFO][4297] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" iface="eth0" netns="/var/run/netns/cni-f0233552-3790-7856-c513-9e1657c344c8" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:26.949 [INFO][4297] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" iface="eth0" netns="/var/run/netns/cni-f0233552-3790-7856-c513-9e1657c344c8" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:26.950 [INFO][4297] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" iface="eth0" netns="/var/run/netns/cni-f0233552-3790-7856-c513-9e1657c344c8" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:26.950 [INFO][4297] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:26.950 [INFO][4297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:27.042 [INFO][4312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:27.042 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:27.042 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:27.058 [WARNING][4312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:27.058 [INFO][4312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:27.062 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:27.069903 containerd[1459]: 2024-12-13 09:49:27.064 [INFO][4297] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:27.073637 containerd[1459]: time="2024-12-13T09:49:27.072068295Z" level=info msg="TearDown network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\" successfully" Dec 13 09:49:27.073637 containerd[1459]: time="2024-12-13T09:49:27.072124021Z" level=info msg="StopPodSandbox for \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\" returns successfully" Dec 13 09:49:27.079080 systemd[1]: run-netns-cni\x2df0233552\x2d3790\x2d7856\x2dc513\x2d9e1657c344c8.mount: Deactivated successfully. Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:26.917 [INFO][4280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:26.918 [INFO][4280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" iface="eth0" netns="/var/run/netns/cni-ee7e8960-a5ba-94f0-2f61-2b9be9d1f639" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:26.918 [INFO][4280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" iface="eth0" netns="/var/run/netns/cni-ee7e8960-a5ba-94f0-2f61-2b9be9d1f639" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:26.919 [INFO][4280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" iface="eth0" netns="/var/run/netns/cni-ee7e8960-a5ba-94f0-2f61-2b9be9d1f639" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:26.919 [INFO][4280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:26.919 [INFO][4280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:27.048 [INFO][4307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:27.049 [INFO][4307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:27.064 [INFO][4307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:27.087 [WARNING][4307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:27.087 [INFO][4307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:27.091 [INFO][4307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:27.101975 containerd[1459]: 2024-12-13 09:49:27.099 [INFO][4280] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:27.104031 containerd[1459]: time="2024-12-13T09:49:27.103361005Z" level=info msg="TearDown network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\" successfully" Dec 13 09:49:27.104031 containerd[1459]: time="2024-12-13T09:49:27.103417544Z" level=info msg="StopPodSandbox for \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\" returns successfully" Dec 13 09:49:27.108902 kubelet[2549]: E1213 09:49:27.105245 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:27.109606 containerd[1459]: time="2024-12-13T09:49:27.106301542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-464d2,Uid:89c56140-f295-40c7-ae2a-952e41b9599a,Namespace:kube-system,Attempt:1,}" Dec 13 09:49:27.109799 systemd[1]: run-netns-cni\x2dee7e8960\x2da5ba\x2d94f0\x2d2f61\x2d2b9be9d1f639.mount: Deactivated successfully. 
Dec 13 09:49:27.126797 kubelet[2549]: E1213 09:49:27.126744 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:27.138882 containerd[1459]: time="2024-12-13T09:49:27.137553281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8kgk,Uid:56d0e423-edd4-4223-a5ef-7fe3393e4271,Namespace:calico-system,Attempt:1,}" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:26.967 [INFO][4287] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:26.967 [INFO][4287] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" iface="eth0" netns="/var/run/netns/cni-86496ccd-d954-849a-4a38-b56c440f781d" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:26.968 [INFO][4287] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" iface="eth0" netns="/var/run/netns/cni-86496ccd-d954-849a-4a38-b56c440f781d" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:26.970 [INFO][4287] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" iface="eth0" netns="/var/run/netns/cni-86496ccd-d954-849a-4a38-b56c440f781d" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:26.970 [INFO][4287] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:26.971 [INFO][4287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:27.051 [INFO][4316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:27.053 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:27.091 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:27.118 [WARNING][4316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:27.118 [INFO][4316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:27.131 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:27.148477 containerd[1459]: 2024-12-13 09:49:27.144 [INFO][4287] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:27.152279 containerd[1459]: time="2024-12-13T09:49:27.151972819Z" level=info msg="TearDown network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\" successfully" Dec 13 09:49:27.152279 containerd[1459]: time="2024-12-13T09:49:27.152035443Z" level=info msg="StopPodSandbox for \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\" returns successfully" Dec 13 09:49:27.163962 containerd[1459]: time="2024-12-13T09:49:27.156306386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c9dd4fbf-lcxl7,Uid:c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc,Namespace:calico-apiserver,Attempt:1,}" Dec 13 09:49:27.158717 systemd[1]: run-netns-cni\x2d86496ccd\x2dd954\x2d849a\x2d4a38\x2db56c440f781d.mount: Deactivated successfully. 
Dec 13 09:49:27.657299 systemd-networkd[1359]: cali3b175e886d4: Link UP Dec 13 09:49:27.657708 systemd-networkd[1359]: cali3b175e886d4: Gained carrier Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.330 [INFO][4328] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0 coredns-7db6d8ff4d- kube-system 89c56140-f295-40c7-ae2a-952e41b9599a 871 0 2024-12-13 09:48:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-d-c5ae8496ec coredns-7db6d8ff4d-464d2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b175e886d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Namespace="kube-system" Pod="coredns-7db6d8ff4d-464d2" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.332 [INFO][4328] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Namespace="kube-system" Pod="coredns-7db6d8ff4d-464d2" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.498 [INFO][4366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" HandleID="k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.528 [INFO][4366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" HandleID="k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000100b90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-d-c5ae8496ec", "pod":"coredns-7db6d8ff4d-464d2", "timestamp":"2024-12-13 09:49:27.498751196 +0000 UTC"}, Hostname:"ci-4081.2.1-d-c5ae8496ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.528 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.528 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.528 [INFO][4366] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-d-c5ae8496ec' Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.533 [INFO][4366] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.545 [INFO][4366] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.571 [INFO][4366] ipam/ipam.go 489: Trying affinity for 192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.582 [INFO][4366] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.596 [INFO][4366] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.596 [INFO][4366] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.600 [INFO][4366] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545 Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.620 [INFO][4366] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.638 [INFO][4366] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.4/26] block=192.168.0.0/26 handle="k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.638 [INFO][4366] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.4/26] handle="k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.638 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:49:27.727772 containerd[1459]: 2024-12-13 09:49:27.638 [INFO][4366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.4/26] IPv6=[] ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" HandleID="k8s-pod-network.4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.734446 containerd[1459]: 2024-12-13 09:49:27.645 [INFO][4328] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Namespace="kube-system" Pod="coredns-7db6d8ff4d-464d2" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89c56140-f295-40c7-ae2a-952e41b9599a", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"", Pod:"coredns-7db6d8ff4d-464d2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b175e886d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:27.734446 containerd[1459]: 2024-12-13 09:49:27.645 [INFO][4328] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.4/32] ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Namespace="kube-system" Pod="coredns-7db6d8ff4d-464d2" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.734446 containerd[1459]: 2024-12-13 09:49:27.646 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b175e886d4 ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Namespace="kube-system" Pod="coredns-7db6d8ff4d-464d2" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.734446 containerd[1459]: 2024-12-13 09:49:27.657 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Namespace="kube-system" Pod="coredns-7db6d8ff4d-464d2" 
WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.734446 containerd[1459]: 2024-12-13 09:49:27.665 [INFO][4328] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Namespace="kube-system" Pod="coredns-7db6d8ff4d-464d2" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89c56140-f295-40c7-ae2a-952e41b9599a", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545", Pod:"coredns-7db6d8ff4d-464d2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b175e886d4", MAC:"da:2f:5d:10:84:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:27.734446 containerd[1459]: 2024-12-13 09:49:27.699 [INFO][4328] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545" Namespace="kube-system" Pod="coredns-7db6d8ff4d-464d2" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:27.849464 systemd-networkd[1359]: calibeb28c852ce: Link UP Dec 13 09:49:27.857667 systemd-networkd[1359]: calibeb28c852ce: Gained carrier Dec 13 09:49:27.867774 containerd[1459]: time="2024-12-13T09:49:27.867522968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:49:27.867774 containerd[1459]: time="2024-12-13T09:49:27.867610877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:49:27.867774 containerd[1459]: time="2024-12-13T09:49:27.867633698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:27.869147 containerd[1459]: time="2024-12-13T09:49:27.868041690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.388 [INFO][4338] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0 csi-node-driver- calico-system 56d0e423-edd4-4223-a5ef-7fe3393e4271 872 0 2024-12-13 09:48:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-d-c5ae8496ec csi-node-driver-z8kgk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibeb28c852ce [] []}} ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Namespace="calico-system" Pod="csi-node-driver-z8kgk" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.388 [INFO][4338] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Namespace="calico-system" Pod="csi-node-driver-z8kgk" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.575 [INFO][4373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" HandleID="k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.612 [INFO][4373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" HandleID="k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-d-c5ae8496ec", "pod":"csi-node-driver-z8kgk", "timestamp":"2024-12-13 09:49:27.575089863 +0000 UTC"}, Hostname:"ci-4081.2.1-d-c5ae8496ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.613 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.638 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.642 [INFO][4373] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-d-c5ae8496ec' Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.647 [INFO][4373] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.676 [INFO][4373] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.711 [INFO][4373] ipam/ipam.go 489: Trying affinity for 192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.719 [INFO][4373] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.731 [INFO][4373] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.731 [INFO][4373] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.736 [INFO][4373] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.756 [INFO][4373] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.788 [INFO][4373] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.5/26] block=192.168.0.0/26 handle="k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.788 [INFO][4373] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.5/26] handle="k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.788 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:49:27.934942 containerd[1459]: 2024-12-13 09:49:27.788 [INFO][4373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.5/26] IPv6=[] ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" HandleID="k8s-pod-network.9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.939741 containerd[1459]: 2024-12-13 09:49:27.799 [INFO][4338] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Namespace="calico-system" Pod="csi-node-driver-z8kgk" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56d0e423-edd4-4223-a5ef-7fe3393e4271", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"", Pod:"csi-node-driver-z8kgk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibeb28c852ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:27.939741 containerd[1459]: 2024-12-13 09:49:27.799 [INFO][4338] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.5/32] ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Namespace="calico-system" Pod="csi-node-driver-z8kgk" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.939741 containerd[1459]: 2024-12-13 09:49:27.799 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeb28c852ce ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Namespace="calico-system" Pod="csi-node-driver-z8kgk" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.939741 containerd[1459]: 2024-12-13 09:49:27.856 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Namespace="calico-system" Pod="csi-node-driver-z8kgk" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.939741 containerd[1459]: 2024-12-13 09:49:27.871 [INFO][4338] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" 
Namespace="calico-system" Pod="csi-node-driver-z8kgk" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56d0e423-edd4-4223-a5ef-7fe3393e4271", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d", Pod:"csi-node-driver-z8kgk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibeb28c852ce", MAC:"1a:dd:0b:5a:e2:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:27.939741 containerd[1459]: 2024-12-13 09:49:27.910 [INFO][4338] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d" Namespace="calico-system" Pod="csi-node-driver-z8kgk" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:27.971238 systemd[1]: Started cri-containerd-4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545.scope - libcontainer container 4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545. 
Dec 13 09:49:28.019338 systemd-networkd[1359]: cali2cc24d342ca: Link UP Dec 13 09:49:28.030067 systemd-networkd[1359]: cali2cc24d342ca: Gained carrier Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.425 [INFO][4349] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0 calico-apiserver-86c9dd4fbf- calico-apiserver c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc 873 0 2024-12-13 09:48:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86c9dd4fbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-d-c5ae8496ec calico-apiserver-86c9dd4fbf-lcxl7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2cc24d342ca [] []}} ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-lcxl7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.425 [INFO][4349] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-lcxl7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.589 [INFO][4377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" HandleID="k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.629 [INFO][4377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" HandleID="k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc500), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-d-c5ae8496ec", "pod":"calico-apiserver-86c9dd4fbf-lcxl7", "timestamp":"2024-12-13 09:49:27.589437054 +0000 UTC"}, Hostname:"ci-4081.2.1-d-c5ae8496ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.629 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.790 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
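The Workload and WorkloadEndpoint names in these entries follow a visible pattern: node, orchestrator, pod and interface joined with "-", with any "-" inside a component doubled so the separators stay unambiguous. The sketch below reconstructs that pattern from the logged names themselves, not from Calico's source:

    // Reproduces the WorkloadEndpoint naming pattern visible in this log.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func wepName(node, orch, pod, iface string) string {
    	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
    	return strings.Join([]string{esc(node), esc(orch), esc(pod), esc(iface)}, "-")
    }

    func main() {
    	fmt.Println(wepName("ci-4081.2.1-d-c5ae8496ec", "k8s", "calico-apiserver-86c9dd4fbf-lcxl7", "eth0"))
    	// ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0
    }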
Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.792 [INFO][4377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-d-c5ae8496ec' Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.798 [INFO][4377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.827 [INFO][4377] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.850 [INFO][4377] ipam/ipam.go 489: Trying affinity for 192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.871 [INFO][4377] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.888 [INFO][4377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.888 [INFO][4377] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.900 [INFO][4377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84 Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.930 [INFO][4377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.970 [INFO][4377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.6/26] block=192.168.0.0/26 handle="k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.971 [INFO][4377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.6/26] handle="k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" host="ci-4081.2.1-d-c5ae8496ec" Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.971 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
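The IPAM trace above follows a fixed sequence: acquire the host-wide lock, look up the host's block affinity, load the 192.168.0.0/26 block, claim the next free address (here 192.168.0.6), write the block back, and release the lock. A simplified Go sketch of that bookkeeping follows; the set of already-claimed addresses is assumed for illustration, and this is not Calico's ipam package:

    // Simplified illustration of the assignment sequence traced above: take the
    // host-affine /26 block, skip addresses already claimed, return the next free one.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		if !claimed[a] {
    			return a, true
    		}
    	}
    	return netip.Addr{}, false // block exhausted; a real allocator would claim another block
    }

    func main() {
    	block := netip.MustParsePrefix("192.168.0.0/26")
    	claimed := map[netip.Addr]bool{}
    	// Assumed already-claimed addresses, chosen so the result matches the trace.
    	for _, s := range []string{"192.168.0.0", "192.168.0.1", "192.168.0.2",
    		"192.168.0.3", "192.168.0.4", "192.168.0.5"} {
    		claimed[netip.MustParseAddr(s)] = true
    	}
    	if ip, ok := nextFree(block, claimed); ok {
    		fmt.Println("assigned:", ip) // 192.168.0.6, matching the trace
    	}
    }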
Dec 13 09:49:28.101156 containerd[1459]: 2024-12-13 09:49:27.971 [INFO][4377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.6/26] IPv6=[] ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" HandleID="k8s-pod-network.da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:28.103820 containerd[1459]: 2024-12-13 09:49:27.991 [INFO][4349] cni-plugin/k8s.go 386: Populated endpoint ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-lcxl7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0", GenerateName:"calico-apiserver-86c9dd4fbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c9dd4fbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"", Pod:"calico-apiserver-86c9dd4fbf-lcxl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc24d342ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:28.103820 containerd[1459]: 2024-12-13 09:49:27.992 [INFO][4349] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.6/32] ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-lcxl7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:28.103820 containerd[1459]: 2024-12-13 09:49:27.992 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cc24d342ca ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-lcxl7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:28.103820 containerd[1459]: 2024-12-13 09:49:28.045 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-lcxl7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:28.103820 containerd[1459]: 2024-12-13 09:49:28.045 [INFO][4349] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-lcxl7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0", GenerateName:"calico-apiserver-86c9dd4fbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c9dd4fbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84", Pod:"calico-apiserver-86c9dd4fbf-lcxl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc24d342ca", MAC:"72:f3:c9:b9:53:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:28.103820 containerd[1459]: 2024-12-13 09:49:28.087 [INFO][4349] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84" Namespace="calico-apiserver" Pod="calico-apiserver-86c9dd4fbf-lcxl7" WorkloadEndpoint="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:28.111333 containerd[1459]: time="2024-12-13T09:49:28.110767951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:49:28.111333 containerd[1459]: time="2024-12-13T09:49:28.110876700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:49:28.111333 containerd[1459]: time="2024-12-13T09:49:28.110894479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:28.111333 containerd[1459]: time="2024-12-13T09:49:28.111002531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:28.138785 kubelet[2549]: E1213 09:49:28.135770 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:28.213259 containerd[1459]: time="2024-12-13T09:49:28.212645867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-464d2,Uid:89c56140-f295-40c7-ae2a-952e41b9599a,Namespace:kube-system,Attempt:1,} returns sandbox id \"4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545\"" Dec 13 09:49:28.215216 systemd[1]: Started cri-containerd-9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d.scope - libcontainer container 9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d. Dec 13 09:49:28.243200 kubelet[2549]: E1213 09:49:28.242715 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:28.255969 containerd[1459]: time="2024-12-13T09:49:28.253332140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:49:28.255969 containerd[1459]: time="2024-12-13T09:49:28.253430535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:49:28.255969 containerd[1459]: time="2024-12-13T09:49:28.253454245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:28.262310 containerd[1459]: time="2024-12-13T09:49:28.258193714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:49:28.269018 containerd[1459]: time="2024-12-13T09:49:28.268426758Z" level=info msg="CreateContainer within sandbox \"4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:49:28.354034 containerd[1459]: time="2024-12-13T09:49:28.353234214Z" level=info msg="CreateContainer within sandbox \"4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"761f55ad66a429a88c2f7fff9c834920b2607cf0c1800c0027842e2959f32c7f\"" Dec 13 09:49:28.361478 containerd[1459]: time="2024-12-13T09:49:28.358628024Z" level=info msg="StartContainer for \"761f55ad66a429a88c2f7fff9c834920b2607cf0c1800c0027842e2959f32c7f\"" Dec 13 09:49:28.381146 systemd[1]: Started cri-containerd-da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84.scope - libcontainer container da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84. Dec 13 09:49:28.404232 containerd[1459]: time="2024-12-13T09:49:28.404051814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8kgk,Uid:56d0e423-edd4-4223-a5ef-7fe3393e4271,Namespace:calico-system,Attempt:1,} returns sandbox id \"9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d\"" Dec 13 09:49:28.488135 systemd[1]: Started cri-containerd-761f55ad66a429a88c2f7fff9c834920b2607cf0c1800c0027842e2959f32c7f.scope - libcontainer container 761f55ad66a429a88c2f7fff9c834920b2607cf0c1800c0027842e2959f32c7f. 
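The repeated kubelet warning above indicates the node's resolv.conf listed more nameservers than the limit of three that can be passed into a pod, so only the first entries were applied, duplicates included. A rough Go sketch of that truncation; the limit and the absence of de-duplication are read off the logged message itself rather than from this kubelet build:

    // Rough sketch of the truncation implied by the "Nameserver limits exceeded" warning:
    // keep only the first N nameservers, without de-duplicating. N=3 is assumed here.
    package main

    import "fmt"

    func applyNameserverLimit(ns []string, limit int) ([]string, bool) {
    	if len(ns) <= limit {
    		return ns, false
    	}
    	return ns[:limit], true // truncated; kubelet would log the warning in this case
    }

    func main() {
    	// Hypothetical resolv.conf contents; only the applied line is visible in the log.
    	fromResolvConf := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.3"}
    	applied, truncated := applyNameserverLimit(fromResolvConf, 3)
    	fmt.Println(applied, "truncated:", truncated) // [67.207.67.2 67.207.67.3 67.207.67.2] truncated: true
    }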
Dec 13 09:49:28.583119 containerd[1459]: time="2024-12-13T09:49:28.582271922Z" level=info msg="StartContainer for \"761f55ad66a429a88c2f7fff9c834920b2607cf0c1800c0027842e2959f32c7f\" returns successfully" Dec 13 09:49:28.595819 containerd[1459]: time="2024-12-13T09:49:28.595745716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c9dd4fbf-lcxl7,Uid:c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84\"" Dec 13 09:49:28.847315 systemd-networkd[1359]: cali3b175e886d4: Gained IPv6LL Dec 13 09:49:29.168818 kubelet[2549]: E1213 09:49:29.167776 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:29.174643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719813095.mount: Deactivated successfully. Dec 13 09:49:29.208408 kubelet[2549]: I1213 09:49:29.207643 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-464d2" podStartSLOduration=38.207614127 podStartE2EDuration="38.207614127s" podCreationTimestamp="2024-12-13 09:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:49:29.205467811 +0000 UTC m=+50.710353668" watchObservedRunningTime="2024-12-13 09:49:29.207614127 +0000 UTC m=+50.712500100" Dec 13 09:49:29.449405 containerd[1459]: time="2024-12-13T09:49:29.447352289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:29.449405 containerd[1459]: time="2024-12-13T09:49:29.448776455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 09:49:29.451390 containerd[1459]: time="2024-12-13T09:49:29.450794268Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:29.454814 containerd[1459]: time="2024-12-13T09:49:29.454706901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:29.456409 containerd[1459]: time="2024-12-13T09:49:29.456290481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.24306001s" Dec 13 09:49:29.457416 containerd[1459]: time="2024-12-13T09:49:29.456824025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 09:49:29.458792 containerd[1459]: time="2024-12-13T09:49:29.458747520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 09:49:29.485276 containerd[1459]: time="2024-12-13T09:49:29.484118119Z" level=info msg="CreateContainer within sandbox 
\"f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 09:49:29.488220 systemd-networkd[1359]: calibeb28c852ce: Gained IPv6LL Dec 13 09:49:29.526923 containerd[1459]: time="2024-12-13T09:49:29.526843290Z" level=info msg="CreateContainer within sandbox \"f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"82d44ce765ac07075cd25959738a707b4f08036fa75d519e91671d1c2cb77730\"" Dec 13 09:49:29.528733 containerd[1459]: time="2024-12-13T09:49:29.528239217Z" level=info msg="StartContainer for \"82d44ce765ac07075cd25959738a707b4f08036fa75d519e91671d1c2cb77730\"" Dec 13 09:49:29.588180 systemd[1]: Started cri-containerd-82d44ce765ac07075cd25959738a707b4f08036fa75d519e91671d1c2cb77730.scope - libcontainer container 82d44ce765ac07075cd25959738a707b4f08036fa75d519e91671d1c2cb77730. Dec 13 09:49:29.616369 systemd-networkd[1359]: cali2cc24d342ca: Gained IPv6LL Dec 13 09:49:29.694495 containerd[1459]: time="2024-12-13T09:49:29.694441379Z" level=info msg="StartContainer for \"82d44ce765ac07075cd25959738a707b4f08036fa75d519e91671d1c2cb77730\" returns successfully" Dec 13 09:49:30.186218 kubelet[2549]: E1213 09:49:30.185664 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:30.236988 kubelet[2549]: I1213 09:49:30.233556 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bf6c6b877-7fph7" podStartSLOduration=26.986940913 podStartE2EDuration="31.233519776s" podCreationTimestamp="2024-12-13 09:48:59 +0000 UTC" firstStartedPulling="2024-12-13 09:49:25.211679942 +0000 UTC m=+46.716565793" lastFinishedPulling="2024-12-13 09:49:29.458258814 +0000 UTC m=+50.963144656" observedRunningTime="2024-12-13 09:49:30.227301484 +0000 UTC m=+51.732187365" watchObservedRunningTime="2024-12-13 09:49:30.233519776 +0000 UTC m=+51.738405642" Dec 13 09:49:31.187801 kubelet[2549]: E1213 09:49:31.186834 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:32.768579 containerd[1459]: time="2024-12-13T09:49:32.768115967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:32.770840 containerd[1459]: time="2024-12-13T09:49:32.770396335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 09:49:32.773878 containerd[1459]: time="2024-12-13T09:49:32.773156603Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:32.774732 containerd[1459]: time="2024-12-13T09:49:32.774613717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:32.776760 containerd[1459]: time="2024-12-13T09:49:32.776696683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id 
\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.317897274s" Dec 13 09:49:32.776760 containerd[1459]: time="2024-12-13T09:49:32.776761022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 09:49:32.778935 containerd[1459]: time="2024-12-13T09:49:32.778441066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 09:49:32.783562 containerd[1459]: time="2024-12-13T09:49:32.783502210Z" level=info msg="CreateContainer within sandbox \"89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 09:49:32.804890 containerd[1459]: time="2024-12-13T09:49:32.802301573Z" level=info msg="CreateContainer within sandbox \"89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec2bdb59d2f5542cc24dee2ad5b260379303263c4b52ba32e5ef045884d13606\"" Dec 13 09:49:32.804890 containerd[1459]: time="2024-12-13T09:49:32.804098205Z" level=info msg="StartContainer for \"ec2bdb59d2f5542cc24dee2ad5b260379303263c4b52ba32e5ef045884d13606\"" Dec 13 09:49:32.807780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738569668.mount: Deactivated successfully. Dec 13 09:49:32.882191 systemd[1]: Started cri-containerd-ec2bdb59d2f5542cc24dee2ad5b260379303263c4b52ba32e5ef045884d13606.scope - libcontainer container ec2bdb59d2f5542cc24dee2ad5b260379303263c4b52ba32e5ef045884d13606. 
Dec 13 09:49:32.951610 containerd[1459]: time="2024-12-13T09:49:32.951545368Z" level=info msg="StartContainer for \"ec2bdb59d2f5542cc24dee2ad5b260379303263c4b52ba32e5ef045884d13606\" returns successfully" Dec 13 09:49:33.225960 kubelet[2549]: I1213 09:49:33.225566 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-d9pn9" podStartSLOduration=26.740451266 podStartE2EDuration="34.225533835s" podCreationTimestamp="2024-12-13 09:48:59 +0000 UTC" firstStartedPulling="2024-12-13 09:49:25.292798232 +0000 UTC m=+46.797684074" lastFinishedPulling="2024-12-13 09:49:32.777880727 +0000 UTC m=+54.282766643" observedRunningTime="2024-12-13 09:49:33.224786226 +0000 UTC m=+54.729672085" watchObservedRunningTime="2024-12-13 09:49:33.225533835 +0000 UTC m=+54.730419697" Dec 13 09:49:34.205759 kubelet[2549]: I1213 09:49:34.204629 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:49:34.622605 containerd[1459]: time="2024-12-13T09:49:34.622520241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:34.625014 containerd[1459]: time="2024-12-13T09:49:34.623407743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 09:49:34.625014 containerd[1459]: time="2024-12-13T09:49:34.624197367Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:34.658116 containerd[1459]: time="2024-12-13T09:49:34.657609208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.879126315s" Dec 13 09:49:34.658401 containerd[1459]: time="2024-12-13T09:49:34.658365497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 09:49:34.658568 containerd[1459]: time="2024-12-13T09:49:34.658320354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:34.673884 containerd[1459]: time="2024-12-13T09:49:34.672778663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 09:49:34.685999 containerd[1459]: time="2024-12-13T09:49:34.683409553Z" level=info msg="CreateContainer within sandbox \"9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 09:49:34.759509 containerd[1459]: time="2024-12-13T09:49:34.759439212Z" level=info msg="CreateContainer within sandbox \"9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9963a7a5e3ca856def6ba4711945643916d75b317951a0c9e5a59f0b05fb608a\"" Dec 13 09:49:34.764326 containerd[1459]: time="2024-12-13T09:49:34.764171823Z" level=info msg="StartContainer for \"9963a7a5e3ca856def6ba4711945643916d75b317951a0c9e5a59f0b05fb608a\"" Dec 13 09:49:34.847261 
systemd[1]: run-containerd-runc-k8s.io-9963a7a5e3ca856def6ba4711945643916d75b317951a0c9e5a59f0b05fb608a-runc.iMVx9E.mount: Deactivated successfully. Dec 13 09:49:34.857279 systemd[1]: Started cri-containerd-9963a7a5e3ca856def6ba4711945643916d75b317951a0c9e5a59f0b05fb608a.scope - libcontainer container 9963a7a5e3ca856def6ba4711945643916d75b317951a0c9e5a59f0b05fb608a. Dec 13 09:49:34.939047 containerd[1459]: time="2024-12-13T09:49:34.938748877Z" level=info msg="StartContainer for \"9963a7a5e3ca856def6ba4711945643916d75b317951a0c9e5a59f0b05fb608a\" returns successfully" Dec 13 09:49:35.080086 containerd[1459]: time="2024-12-13T09:49:35.079978364Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:35.082770 containerd[1459]: time="2024-12-13T09:49:35.081951549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 09:49:35.084988 containerd[1459]: time="2024-12-13T09:49:35.084925236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 410.803599ms" Dec 13 09:49:35.085251 containerd[1459]: time="2024-12-13T09:49:35.085219703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 09:49:35.088170 containerd[1459]: time="2024-12-13T09:49:35.087838362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 09:49:35.092272 containerd[1459]: time="2024-12-13T09:49:35.092206766Z" level=info msg="CreateContainer within sandbox \"da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 09:49:35.111768 containerd[1459]: time="2024-12-13T09:49:35.111693288Z" level=info msg="CreateContainer within sandbox \"da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f14678ff2d9fea085a393d0393f1f8cb65bae949e8fe0d6152ae92197f43ad86\"" Dec 13 09:49:35.113561 containerd[1459]: time="2024-12-13T09:49:35.113021775Z" level=info msg="StartContainer for \"f14678ff2d9fea085a393d0393f1f8cb65bae949e8fe0d6152ae92197f43ad86\"" Dec 13 09:49:35.159217 systemd[1]: Started cri-containerd-f14678ff2d9fea085a393d0393f1f8cb65bae949e8fe0d6152ae92197f43ad86.scope - libcontainer container f14678ff2d9fea085a393d0393f1f8cb65bae949e8fe0d6152ae92197f43ad86. Dec 13 09:49:35.235815 containerd[1459]: time="2024-12-13T09:49:35.231924871Z" level=info msg="StartContainer for \"f14678ff2d9fea085a393d0393f1f8cb65bae949e8fe0d6152ae92197f43ad86\" returns successfully" Dec 13 09:49:35.742838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325780347.mount: Deactivated successfully. 
Dec 13 09:49:36.246254 kubelet[2549]: I1213 09:49:36.246170 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86c9dd4fbf-lcxl7" podStartSLOduration=30.761486812 podStartE2EDuration="37.246147366s" podCreationTimestamp="2024-12-13 09:48:59 +0000 UTC" firstStartedPulling="2024-12-13 09:49:28.602950417 +0000 UTC m=+50.107836257" lastFinishedPulling="2024-12-13 09:49:35.087610962 +0000 UTC m=+56.592496811" observedRunningTime="2024-12-13 09:49:36.245666572 +0000 UTC m=+57.750552430" watchObservedRunningTime="2024-12-13 09:49:36.246147366 +0000 UTC m=+57.751033223" Dec 13 09:49:36.937068 containerd[1459]: time="2024-12-13T09:49:36.936984318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:36.939187 containerd[1459]: time="2024-12-13T09:49:36.938937369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 09:49:36.940690 containerd[1459]: time="2024-12-13T09:49:36.940495313Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:36.947706 containerd[1459]: time="2024-12-13T09:49:36.947627600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:49:36.948974 containerd[1459]: time="2024-12-13T09:49:36.948731149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.860804151s" Dec 13 09:49:36.948974 containerd[1459]: time="2024-12-13T09:49:36.948801212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 09:49:36.953896 containerd[1459]: time="2024-12-13T09:49:36.953823173Z" level=info msg="CreateContainer within sandbox \"9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 09:49:36.993933 containerd[1459]: time="2024-12-13T09:49:36.993720124Z" level=info msg="CreateContainer within sandbox \"9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c2da6cb33dbfc5a172f66e317c80180af456f40604a11141adb1a6343e7f3086\"" Dec 13 09:49:36.994933 containerd[1459]: time="2024-12-13T09:49:36.994747881Z" level=info msg="StartContainer for \"c2da6cb33dbfc5a172f66e317c80180af456f40604a11141adb1a6343e7f3086\"" Dec 13 09:49:37.054222 systemd[1]: Started cri-containerd-c2da6cb33dbfc5a172f66e317c80180af456f40604a11141adb1a6343e7f3086.scope - libcontainer container c2da6cb33dbfc5a172f66e317c80180af456f40604a11141adb1a6343e7f3086. 
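The pod_startup_latency_tracker entries above report two figures per pod. Reconstructing the calico-apiserver-86c9dd4fbf-lcxl7 numbers suggests podStartSLOduration is the end-to-end startup time minus the image-pull window; the short Go check below works through that arithmetic, with the relationship inferred from the logged values rather than from this kubelet build:

    // Worked check of the latency figures logged for calico-apiserver-86c9dd4fbf-lcxl7.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	firstPull, _ := time.Parse(layout, "2024-12-13 09:49:28.602950417 +0000 UTC")
    	lastPull, _ := time.Parse(layout, "2024-12-13 09:49:35.087610962 +0000 UTC")
    	e2e, _ := time.ParseDuration("37.246147366s") // podStartE2EDuration from the log

    	pullWindow := lastPull.Sub(firstPull)
    	slo := e2e - pullWindow
    	fmt.Println("pull window:", pullWindow) // ≈ 6.484660545s
    	fmt.Println("slo duration:", slo)       // ≈ 30.761486821s; the log shows 30.761486812, equal up to rounding
    }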
Dec 13 09:49:37.121045 containerd[1459]: time="2024-12-13T09:49:37.120488884Z" level=info msg="StartContainer for \"c2da6cb33dbfc5a172f66e317c80180af456f40604a11141adb1a6343e7f3086\" returns successfully" Dec 13 09:49:37.231380 kubelet[2549]: I1213 09:49:37.231229 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:49:37.254177 kubelet[2549]: I1213 09:49:37.253694 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z8kgk" podStartSLOduration=29.709909272 podStartE2EDuration="38.253665048s" podCreationTimestamp="2024-12-13 09:48:59 +0000 UTC" firstStartedPulling="2024-12-13 09:49:28.40724196 +0000 UTC m=+49.912127798" lastFinishedPulling="2024-12-13 09:49:36.950997712 +0000 UTC m=+58.455883574" observedRunningTime="2024-12-13 09:49:37.252179804 +0000 UTC m=+58.757065723" watchObservedRunningTime="2024-12-13 09:49:37.253665048 +0000 UTC m=+58.758550912" Dec 13 09:49:38.167341 kubelet[2549]: I1213 09:49:38.167270 2549 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 09:49:38.172693 kubelet[2549]: I1213 09:49:38.172635 2549 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 09:49:38.783296 containerd[1459]: time="2024-12-13T09:49:38.783235034Z" level=info msg="StopPodSandbox for \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\"" Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:38.947 [WARNING][4855] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56d0e423-edd4-4223-a5ef-7fe3393e4271", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d", Pod:"csi-node-driver-z8kgk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibeb28c852ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:38.951 [INFO][4855] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:38.951 [INFO][4855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" iface="eth0" netns="" Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:38.951 [INFO][4855] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:38.951 [INFO][4855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:38.992 [INFO][4862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:38.992 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:38.993 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:39.000 [WARNING][4862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:39.001 [INFO][4862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:39.004 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:39.009938 containerd[1459]: 2024-12-13 09:49:39.006 [INFO][4855] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:39.009938 containerd[1459]: time="2024-12-13T09:49:39.009882465Z" level=info msg="TearDown network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\" successfully" Dec 13 09:49:39.009938 containerd[1459]: time="2024-12-13T09:49:39.009925537Z" level=info msg="StopPodSandbox for \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\" returns successfully" Dec 13 09:49:39.012237 containerd[1459]: time="2024-12-13T09:49:39.010745638Z" level=info msg="RemovePodSandbox for \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\"" Dec 13 09:49:39.012237 containerd[1459]: time="2024-12-13T09:49:39.010782732Z" level=info msg="Forcibly stopping sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\"" Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.078 [WARNING][4881] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56d0e423-edd4-4223-a5ef-7fe3393e4271", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"9e50c2fe37a0ff09e09a14fc2d50bee1efcd03eb1a24bb6c3920a6e9c027c16d", Pod:"csi-node-driver-z8kgk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibeb28c852ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.078 [INFO][4881] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.078 [INFO][4881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" iface="eth0" netns="" Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.078 [INFO][4881] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.078 [INFO][4881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.118 [INFO][4887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.118 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.118 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.130 [WARNING][4887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.130 [INFO][4887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" HandleID="k8s-pod-network.5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-csi--node--driver--z8kgk-eth0" Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.133 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:39.142050 containerd[1459]: 2024-12-13 09:49:39.137 [INFO][4881] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032" Dec 13 09:49:39.142050 containerd[1459]: time="2024-12-13T09:49:39.140038548Z" level=info msg="TearDown network for sandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\" successfully" Dec 13 09:49:39.156038 containerd[1459]: time="2024-12-13T09:49:39.155941086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:49:39.156612 containerd[1459]: time="2024-12-13T09:49:39.156421280Z" level=info msg="RemovePodSandbox \"5c75e6e80546eb168b5765424bba8d3fff071e062eeb36b1ea0dbcc09848e032\" returns successfully" Dec 13 09:49:39.157534 containerd[1459]: time="2024-12-13T09:49:39.157487934Z" level=info msg="StopPodSandbox for \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\"" Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.216 [WARNING][4905] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0", GenerateName:"calico-kube-controllers-5bf6c6b877-", Namespace:"calico-system", SelfLink:"", UID:"e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bf6c6b877", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da", Pod:"calico-kube-controllers-5bf6c6b877-7fph7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51203558a73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.217 [INFO][4905] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.217 [INFO][4905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" iface="eth0" netns="" Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.217 [INFO][4905] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.217 [INFO][4905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.272 [INFO][4911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.272 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.272 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.280 [WARNING][4911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.280 [INFO][4911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.283 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:39.290665 containerd[1459]: 2024-12-13 09:49:39.286 [INFO][4905] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:39.290665 containerd[1459]: time="2024-12-13T09:49:39.290509043Z" level=info msg="TearDown network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\" successfully" Dec 13 09:49:39.290665 containerd[1459]: time="2024-12-13T09:49:39.290538681Z" level=info msg="StopPodSandbox for \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\" returns successfully" Dec 13 09:49:39.292362 containerd[1459]: time="2024-12-13T09:49:39.292277060Z" level=info msg="RemovePodSandbox for \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\"" Dec 13 09:49:39.292405 containerd[1459]: time="2024-12-13T09:49:39.292371634Z" level=info msg="Forcibly stopping sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\"" Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.362 [WARNING][4929] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0", GenerateName:"calico-kube-controllers-5bf6c6b877-", Namespace:"calico-system", SelfLink:"", UID:"e9d7b383-6d7b-4fc2-8e54-c423fd3aaee5", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bf6c6b877", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"f88e87eb5b1f403602a10b43447a4705894aa3f36b486f6a850f5ed9024cb8da", Pod:"calico-kube-controllers-5bf6c6b877-7fph7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51203558a73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.364 [INFO][4929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.364 [INFO][4929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" iface="eth0" netns="" Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.364 [INFO][4929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.364 [INFO][4929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.418 [INFO][4936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.418 [INFO][4936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.418 [INFO][4936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.433 [WARNING][4936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.434 [INFO][4936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" HandleID="k8s-pod-network.a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--kube--controllers--5bf6c6b877--7fph7-eth0" Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.440 [INFO][4936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:39.447583 containerd[1459]: 2024-12-13 09:49:39.443 [INFO][4929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60" Dec 13 09:49:39.449128 containerd[1459]: time="2024-12-13T09:49:39.448131916Z" level=info msg="TearDown network for sandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\" successfully" Dec 13 09:49:39.459932 containerd[1459]: time="2024-12-13T09:49:39.459878073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:49:39.460367 containerd[1459]: time="2024-12-13T09:49:39.460218451Z" level=info msg="RemovePodSandbox \"a745741dd6fea6bfc613c4c6cc0844f2d02b642c65e77cd08ec0e9f3160ede60\" returns successfully" Dec 13 09:49:39.461815 containerd[1459]: time="2024-12-13T09:49:39.461406105Z" level=info msg="StopPodSandbox for \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\"" Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.538 [WARNING][4955] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0", GenerateName:"calico-apiserver-86c9dd4fbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"97522c65-0ab9-4890-ab5c-998cdfc7bb0c", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c9dd4fbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf", Pod:"calico-apiserver-86c9dd4fbf-d9pn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6588cede904", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.539 [INFO][4955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.539 [INFO][4955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" iface="eth0" netns="" Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.539 [INFO][4955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.539 [INFO][4955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.590 [INFO][4961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.591 [INFO][4961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.591 [INFO][4961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.600 [WARNING][4961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.601 [INFO][4961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.604 [INFO][4961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:39.609591 containerd[1459]: 2024-12-13 09:49:39.607 [INFO][4955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:39.610799 containerd[1459]: time="2024-12-13T09:49:39.610588535Z" level=info msg="TearDown network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\" successfully" Dec 13 09:49:39.610799 containerd[1459]: time="2024-12-13T09:49:39.610635377Z" level=info msg="StopPodSandbox for \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\" returns successfully" Dec 13 09:49:39.611418 containerd[1459]: time="2024-12-13T09:49:39.611363067Z" level=info msg="RemovePodSandbox for \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\"" Dec 13 09:49:39.611418 containerd[1459]: time="2024-12-13T09:49:39.611404586Z" level=info msg="Forcibly stopping sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\"" Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.675 [WARNING][4979] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0", GenerateName:"calico-apiserver-86c9dd4fbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"97522c65-0ab9-4890-ab5c-998cdfc7bb0c", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c9dd4fbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"89c083e9147355c49128996ae783464bc7c240ac9797d672d663b346d739bcbf", Pod:"calico-apiserver-86c9dd4fbf-d9pn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6588cede904", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.676 [INFO][4979] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.676 [INFO][4979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" iface="eth0" netns="" Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.676 [INFO][4979] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.676 [INFO][4979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.710 [INFO][4985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.710 [INFO][4985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.710 [INFO][4985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.719 [WARNING][4985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.719 [INFO][4985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" HandleID="k8s-pod-network.5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--d9pn9-eth0" Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.722 [INFO][4985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:39.728579 containerd[1459]: 2024-12-13 09:49:39.725 [INFO][4979] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135" Dec 13 09:49:39.729295 containerd[1459]: time="2024-12-13T09:49:39.728553200Z" level=info msg="TearDown network for sandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\" successfully" Dec 13 09:49:39.733631 containerd[1459]: time="2024-12-13T09:49:39.733543340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:49:39.733831 containerd[1459]: time="2024-12-13T09:49:39.733679339Z" level=info msg="RemovePodSandbox \"5aae01cbde5a234023a54c3b41d360b07ffda50c3fdd1730a2a86bfe0d163135\" returns successfully" Dec 13 09:49:39.734665 containerd[1459]: time="2024-12-13T09:49:39.734608212Z" level=info msg="StopPodSandbox for \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\"" Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.792 [WARNING][5003] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0", GenerateName:"calico-apiserver-86c9dd4fbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c9dd4fbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84", Pod:"calico-apiserver-86c9dd4fbf-lcxl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc24d342ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.792 [INFO][5003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.792 [INFO][5003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" iface="eth0" netns="" Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.792 [INFO][5003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.792 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.832 [INFO][5009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.833 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.833 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.840 [WARNING][5009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.840 [INFO][5009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.843 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:39.848436 containerd[1459]: 2024-12-13 09:49:39.845 [INFO][5003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:39.850087 containerd[1459]: time="2024-12-13T09:49:39.849618006Z" level=info msg="TearDown network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\" successfully" Dec 13 09:49:39.850087 containerd[1459]: time="2024-12-13T09:49:39.849670918Z" level=info msg="StopPodSandbox for \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\" returns successfully" Dec 13 09:49:39.851038 containerd[1459]: time="2024-12-13T09:49:39.850327064Z" level=info msg="RemovePodSandbox for \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\"" Dec 13 09:49:39.851038 containerd[1459]: time="2024-12-13T09:49:39.850365193Z" level=info msg="Forcibly stopping sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\"" Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.904 [WARNING][5027] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0", GenerateName:"calico-apiserver-86c9dd4fbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c34b1b21-b5b0-4dd4-928a-ce99fd8dbbbc", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c9dd4fbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"da1f9eee5d6717c172858c7b731b80e24ea534c9a269300986326dc07d81ef84", Pod:"calico-apiserver-86c9dd4fbf-lcxl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc24d342ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.905 [INFO][5027] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.905 [INFO][5027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" iface="eth0" netns="" Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.905 [INFO][5027] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.905 [INFO][5027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.934 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.935 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.935 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.943 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.943 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" HandleID="k8s-pod-network.057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-calico--apiserver--86c9dd4fbf--lcxl7-eth0" Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.945 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:39.949554 containerd[1459]: 2024-12-13 09:49:39.947 [INFO][5027] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766" Dec 13 09:49:39.950713 containerd[1459]: time="2024-12-13T09:49:39.950013698Z" level=info msg="TearDown network for sandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\" successfully" Dec 13 09:49:39.954265 containerd[1459]: time="2024-12-13T09:49:39.953991081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:49:39.954265 containerd[1459]: time="2024-12-13T09:49:39.954092261Z" level=info msg="RemovePodSandbox \"057efe23f5275971e6dfc7a081f92c63f2006e578fa2b36680b092d2778da766\" returns successfully" Dec 13 09:49:39.955161 containerd[1459]: time="2024-12-13T09:49:39.954814389Z" level=info msg="StopPodSandbox for \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\"" Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.005 [WARNING][5052] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89c56140-f295-40c7-ae2a-952e41b9599a", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545", Pod:"coredns-7db6d8ff4d-464d2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b175e886d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.006 [INFO][5052] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.006 [INFO][5052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" iface="eth0" netns="" Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.006 [INFO][5052] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.007 [INFO][5052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.035 [INFO][5058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.035 [INFO][5058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.035 [INFO][5058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.045 [WARNING][5058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.045 [INFO][5058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.047 [INFO][5058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:40.053357 containerd[1459]: 2024-12-13 09:49:40.050 [INFO][5052] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:40.054217 containerd[1459]: time="2024-12-13T09:49:40.053407699Z" level=info msg="TearDown network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\" successfully" Dec 13 09:49:40.054217 containerd[1459]: time="2024-12-13T09:49:40.053439226Z" level=info msg="StopPodSandbox for \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\" returns successfully" Dec 13 09:49:40.055334 containerd[1459]: time="2024-12-13T09:49:40.054471868Z" level=info msg="RemovePodSandbox for \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\"" Dec 13 09:49:40.055334 containerd[1459]: time="2024-12-13T09:49:40.054507467Z" level=info msg="Forcibly stopping sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\"" Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.109 [WARNING][5076] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89c56140-f295-40c7-ae2a-952e41b9599a", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"4b9d00c8124d97930ac566f36499aba2dd612588085b8e906c4dea73a28b7545", Pod:"coredns-7db6d8ff4d-464d2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b175e886d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.109 [INFO][5076] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.109 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" iface="eth0" netns="" Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.109 [INFO][5076] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.109 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.139 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.139 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.139 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.146 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.146 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" HandleID="k8s-pod-network.af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--464d2-eth0" Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.149 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:40.154683 containerd[1459]: 2024-12-13 09:49:40.151 [INFO][5076] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f" Dec 13 09:49:40.155551 containerd[1459]: time="2024-12-13T09:49:40.154731796Z" level=info msg="TearDown network for sandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\" successfully" Dec 13 09:49:40.157546 containerd[1459]: time="2024-12-13T09:49:40.157500155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:49:40.157698 containerd[1459]: time="2024-12-13T09:49:40.157577211Z" level=info msg="RemovePodSandbox \"af951e7a358aff50950246461650e0a7adfd33f19366a09954150c87eb74ab6f\" returns successfully" Dec 13 09:49:40.158627 containerd[1459]: time="2024-12-13T09:49:40.158226298Z" level=info msg="StopPodSandbox for \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\"" Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.215 [WARNING][5101] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9046cd15-11c7-4a60-ba00-3642ddd7002a", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713", Pod:"coredns-7db6d8ff4d-4xm55", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4c2066ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.215 [INFO][5101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.216 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" iface="eth0" netns="" Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.216 [INFO][5101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.216 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.274 [INFO][5107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.274 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.275 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.288 [WARNING][5107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.288 [INFO][5107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.291 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:40.297190 containerd[1459]: 2024-12-13 09:49:40.294 [INFO][5101] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:40.298247 containerd[1459]: time="2024-12-13T09:49:40.298050110Z" level=info msg="TearDown network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\" successfully" Dec 13 09:49:40.298247 containerd[1459]: time="2024-12-13T09:49:40.298103980Z" level=info msg="StopPodSandbox for \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\" returns successfully" Dec 13 09:49:40.298932 containerd[1459]: time="2024-12-13T09:49:40.298879017Z" level=info msg="RemovePodSandbox for \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\"" Dec 13 09:49:40.299109 containerd[1459]: time="2024-12-13T09:49:40.299044905Z" level=info msg="Forcibly stopping sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\"" Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.363 [WARNING][5125] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9046cd15-11c7-4a60-ba00-3642ddd7002a", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-d-c5ae8496ec", ContainerID:"5b3420cc8a40cbdddfe53c4fc687a309ad11d7b909a2b0a0c5f72da4874e7713", Pod:"coredns-7db6d8ff4d-4xm55", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4c2066ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.365 [INFO][5125] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.366 [INFO][5125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" iface="eth0" netns="" Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.366 [INFO][5125] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.366 [INFO][5125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.404 [INFO][5132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.404 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.405 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.412 [WARNING][5132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.412 [INFO][5132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" HandleID="k8s-pod-network.c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Workload="ci--4081.2.1--d--c5ae8496ec-k8s-coredns--7db6d8ff4d--4xm55-eth0" Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.415 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:49:40.420356 containerd[1459]: 2024-12-13 09:49:40.417 [INFO][5125] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df" Dec 13 09:49:40.420356 containerd[1459]: time="2024-12-13T09:49:40.420230110Z" level=info msg="TearDown network for sandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\" successfully" Dec 13 09:49:40.426366 containerd[1459]: time="2024-12-13T09:49:40.426251483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:49:40.426553 containerd[1459]: time="2024-12-13T09:49:40.426421308Z" level=info msg="RemovePodSandbox \"c2fc255fd4e942f4f893bb52f8456827bd455aca6a60de4b7f00b6042b49a4df\" returns successfully" Dec 13 09:49:40.937797 kubelet[2549]: E1213 09:49:40.937230 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:49:55.927372 kubelet[2549]: I1213 09:49:55.927032 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:50:06.746064 kubelet[2549]: E1213 09:50:06.745056 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:50:07.746432 kubelet[2549]: E1213 09:50:07.745646 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:50:09.745432 kubelet[2549]: E1213 09:50:09.745326 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:50:09.745432 kubelet[2549]: E1213 09:50:09.745330 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:50:35.745955 kubelet[2549]: E1213 09:50:35.745056 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:50:36.745885 kubelet[2549]: 
E1213 09:50:36.745224 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:50:43.746916 kubelet[2549]: E1213 09:50:43.746007 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:50:50.746476 kubelet[2549]: E1213 09:50:50.745781 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:51:06.764372 systemd[1]: Started sshd@7-159.223.206.54:22-147.75.109.163:47062.service - OpenSSH per-connection server daemon (147.75.109.163:47062). Dec 13 09:51:06.942522 sshd[5354]: Accepted publickey for core from 147.75.109.163 port 47062 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:06.951285 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:06.973698 systemd-logind[1435]: New session 8 of user core. Dec 13 09:51:06.990231 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 09:51:07.793119 sshd[5354]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:07.800989 systemd[1]: sshd@7-159.223.206.54:22-147.75.109.163:47062.service: Deactivated successfully. Dec 13 09:51:07.804657 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 09:51:07.806724 systemd-logind[1435]: Session 8 logged out. Waiting for processes to exit. Dec 13 09:51:07.808979 systemd-logind[1435]: Removed session 8. Dec 13 09:51:12.253369 systemd[1]: run-containerd-runc-k8s.io-82d44ce765ac07075cd25959738a707b4f08036fa75d519e91671d1c2cb77730-runc.JOKc2a.mount: Deactivated successfully. Dec 13 09:51:12.816420 systemd[1]: Started sshd@8-159.223.206.54:22-147.75.109.163:47068.service - OpenSSH per-connection server daemon (147.75.109.163:47068). Dec 13 09:51:12.897976 sshd[5408]: Accepted publickey for core from 147.75.109.163 port 47068 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:12.901097 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:12.911927 systemd-logind[1435]: New session 9 of user core. Dec 13 09:51:12.922177 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 09:51:13.201627 sshd[5408]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:13.209005 systemd[1]: sshd@8-159.223.206.54:22-147.75.109.163:47068.service: Deactivated successfully. Dec 13 09:51:13.212271 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 09:51:13.214182 systemd-logind[1435]: Session 9 logged out. Waiting for processes to exit. Dec 13 09:51:13.216215 systemd-logind[1435]: Removed session 9. Dec 13 09:51:18.222483 systemd[1]: Started sshd@9-159.223.206.54:22-147.75.109.163:53054.service - OpenSSH per-connection server daemon (147.75.109.163:53054). Dec 13 09:51:18.282384 sshd[5422]: Accepted publickey for core from 147.75.109.163 port 53054 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:18.283565 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:18.289281 systemd-logind[1435]: New session 10 of user core. Dec 13 09:51:18.295474 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 09:51:18.442310 sshd[5422]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:18.448361 systemd[1]: sshd@9-159.223.206.54:22-147.75.109.163:53054.service: Deactivated successfully. Dec 13 09:51:18.448363 systemd-logind[1435]: Session 10 logged out. Waiting for processes to exit. Dec 13 09:51:18.452285 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 09:51:18.453694 systemd-logind[1435]: Removed session 10. Dec 13 09:51:20.747562 kubelet[2549]: E1213 09:51:20.745743 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:51:23.467278 systemd[1]: Started sshd@10-159.223.206.54:22-147.75.109.163:53056.service - OpenSSH per-connection server daemon (147.75.109.163:53056). Dec 13 09:51:23.528171 sshd[5439]: Accepted publickey for core from 147.75.109.163 port 53056 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:23.530264 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:23.538006 systemd-logind[1435]: New session 11 of user core. Dec 13 09:51:23.543258 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 09:51:23.723789 sshd[5439]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:23.734752 systemd[1]: sshd@10-159.223.206.54:22-147.75.109.163:53056.service: Deactivated successfully. Dec 13 09:51:23.737940 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 09:51:23.741692 systemd-logind[1435]: Session 11 logged out. Waiting for processes to exit. Dec 13 09:51:23.748384 systemd[1]: Started sshd@11-159.223.206.54:22-147.75.109.163:53064.service - OpenSSH per-connection server daemon (147.75.109.163:53064). Dec 13 09:51:23.751274 systemd-logind[1435]: Removed session 11. Dec 13 09:51:23.801972 sshd[5453]: Accepted publickey for core from 147.75.109.163 port 53064 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:23.803784 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:23.810455 systemd-logind[1435]: New session 12 of user core. Dec 13 09:51:23.820191 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 09:51:24.053177 sshd[5453]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:24.072072 systemd[1]: sshd@11-159.223.206.54:22-147.75.109.163:53064.service: Deactivated successfully. Dec 13 09:51:24.077779 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 09:51:24.082458 systemd-logind[1435]: Session 12 logged out. Waiting for processes to exit. Dec 13 09:51:24.092835 systemd[1]: Started sshd@12-159.223.206.54:22-147.75.109.163:53072.service - OpenSSH per-connection server daemon (147.75.109.163:53072). Dec 13 09:51:24.108508 systemd-logind[1435]: Removed session 12. Dec 13 09:51:24.181382 sshd[5464]: Accepted publickey for core from 147.75.109.163 port 53072 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:24.184469 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:24.197039 systemd-logind[1435]: New session 13 of user core. Dec 13 09:51:24.212222 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 13 09:51:24.403678 sshd[5464]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:24.410986 systemd[1]: sshd@12-159.223.206.54:22-147.75.109.163:53072.service: Deactivated successfully. Dec 13 09:51:24.414771 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 09:51:24.416764 systemd-logind[1435]: Session 13 logged out. Waiting for processes to exit. Dec 13 09:51:24.418653 systemd-logind[1435]: Removed session 13. Dec 13 09:51:24.747950 kubelet[2549]: E1213 09:51:24.745323 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:51:25.745655 kubelet[2549]: E1213 09:51:25.745514 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:51:28.745297 kubelet[2549]: E1213 09:51:28.745231 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:51:29.424353 systemd[1]: Started sshd@13-159.223.206.54:22-147.75.109.163:46308.service - OpenSSH per-connection server daemon (147.75.109.163:46308). Dec 13 09:51:29.475015 sshd[5476]: Accepted publickey for core from 147.75.109.163 port 46308 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:29.478477 sshd[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:29.485232 systemd-logind[1435]: New session 14 of user core. Dec 13 09:51:29.490250 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 09:51:29.663064 sshd[5476]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:29.668915 systemd-logind[1435]: Session 14 logged out. Waiting for processes to exit. Dec 13 09:51:29.669202 systemd[1]: sshd@13-159.223.206.54:22-147.75.109.163:46308.service: Deactivated successfully. Dec 13 09:51:29.673310 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 09:51:29.676500 systemd-logind[1435]: Removed session 14. Dec 13 09:51:34.683466 systemd[1]: Started sshd@14-159.223.206.54:22-147.75.109.163:46312.service - OpenSSH per-connection server daemon (147.75.109.163:46312). Dec 13 09:51:34.723984 sshd[5494]: Accepted publickey for core from 147.75.109.163 port 46312 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:34.729308 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:34.736054 systemd-logind[1435]: New session 15 of user core. Dec 13 09:51:34.742401 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 09:51:34.911355 sshd[5494]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:34.918267 systemd-logind[1435]: Session 15 logged out. Waiting for processes to exit. Dec 13 09:51:34.919180 systemd[1]: sshd@14-159.223.206.54:22-147.75.109.163:46312.service: Deactivated successfully. Dec 13 09:51:34.924140 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 09:51:34.926488 systemd-logind[1435]: Removed session 15. Dec 13 09:51:39.939997 systemd[1]: Started sshd@15-159.223.206.54:22-147.75.109.163:54240.service - OpenSSH per-connection server daemon (147.75.109.163:54240). 
Dec 13 09:51:39.997620 sshd[5509]: Accepted publickey for core from 147.75.109.163 port 54240 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:39.999981 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:40.008099 systemd-logind[1435]: New session 16 of user core. Dec 13 09:51:40.014240 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 09:51:40.177278 sshd[5509]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:40.183282 systemd[1]: sshd@15-159.223.206.54:22-147.75.109.163:54240.service: Deactivated successfully. Dec 13 09:51:40.188243 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 09:51:40.192617 systemd-logind[1435]: Session 16 logged out. Waiting for processes to exit. Dec 13 09:51:40.195262 systemd-logind[1435]: Removed session 16. Dec 13 09:51:41.745201 kubelet[2549]: E1213 09:51:41.745141 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:51:45.195662 systemd[1]: Started sshd@16-159.223.206.54:22-147.75.109.163:54242.service - OpenSSH per-connection server daemon (147.75.109.163:54242). Dec 13 09:51:45.241769 sshd[5561]: Accepted publickey for core from 147.75.109.163 port 54242 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:45.244324 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:45.251103 systemd-logind[1435]: New session 17 of user core. Dec 13 09:51:45.254554 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 09:51:45.448404 sshd[5561]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:45.460689 systemd[1]: sshd@16-159.223.206.54:22-147.75.109.163:54242.service: Deactivated successfully. Dec 13 09:51:45.465152 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 09:51:45.466697 systemd-logind[1435]: Session 17 logged out. Waiting for processes to exit. Dec 13 09:51:45.477434 systemd[1]: Started sshd@17-159.223.206.54:22-147.75.109.163:54246.service - OpenSSH per-connection server daemon (147.75.109.163:54246). Dec 13 09:51:45.480391 systemd-logind[1435]: Removed session 17. Dec 13 09:51:45.542262 sshd[5574]: Accepted publickey for core from 147.75.109.163 port 54246 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4 Dec 13 09:51:45.545061 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:51:45.555055 systemd-logind[1435]: New session 18 of user core. Dec 13 09:51:45.562243 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 09:51:46.068310 sshd[5574]: pam_unix(sshd:session): session closed for user core Dec 13 09:51:46.080151 systemd[1]: sshd@17-159.223.206.54:22-147.75.109.163:54246.service: Deactivated successfully. Dec 13 09:51:46.083768 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 09:51:46.085090 systemd-logind[1435]: Session 18 logged out. Waiting for processes to exit. Dec 13 09:51:46.094683 systemd[1]: Started sshd@18-159.223.206.54:22-147.75.109.163:52084.service - OpenSSH per-connection server daemon (147.75.109.163:52084). Dec 13 09:51:46.097883 systemd-logind[1435]: Removed session 18. 
Dec 13 09:51:46.205741 sshd[5586]: Accepted publickey for core from 147.75.109.163 port 52084 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4
Dec 13 09:51:46.207960 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:51:46.214363 systemd-logind[1435]: New session 19 of user core.
Dec 13 09:51:46.233251 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 09:51:46.747134 kubelet[2549]: E1213 09:51:46.746670 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:51:49.320308 sshd[5586]: pam_unix(sshd:session): session closed for user core
Dec 13 09:51:49.351045 systemd[1]: Started sshd@19-159.223.206.54:22-147.75.109.163:52096.service - OpenSSH per-connection server daemon (147.75.109.163:52096).
Dec 13 09:51:49.352628 systemd[1]: sshd@18-159.223.206.54:22-147.75.109.163:52084.service: Deactivated successfully.
Dec 13 09:51:49.360726 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 09:51:49.365301 systemd-logind[1435]: Session 19 logged out. Waiting for processes to exit.
Dec 13 09:51:49.371934 systemd-logind[1435]: Removed session 19.
Dec 13 09:51:49.466908 sshd[5604]: Accepted publickey for core from 147.75.109.163 port 52096 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4
Dec 13 09:51:49.470688 sshd[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:51:49.480201 systemd-logind[1435]: New session 20 of user core.
Dec 13 09:51:49.484249 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 09:51:50.150919 sshd[5604]: pam_unix(sshd:session): session closed for user core
Dec 13 09:51:50.169454 systemd[1]: sshd@19-159.223.206.54:22-147.75.109.163:52096.service: Deactivated successfully.
Dec 13 09:51:50.175579 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 09:51:50.183599 systemd-logind[1435]: Session 20 logged out. Waiting for processes to exit.
Dec 13 09:51:50.193445 systemd[1]: Started sshd@20-159.223.206.54:22-147.75.109.163:52110.service - OpenSSH per-connection server daemon (147.75.109.163:52110).
Dec 13 09:51:50.197890 systemd-logind[1435]: Removed session 20.
Dec 13 09:51:50.257784 sshd[5617]: Accepted publickey for core from 147.75.109.163 port 52110 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4
Dec 13 09:51:50.260632 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:51:50.270200 systemd-logind[1435]: New session 21 of user core.
Dec 13 09:51:50.277276 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 09:51:50.446024 sshd[5617]: pam_unix(sshd:session): session closed for user core
Dec 13 09:51:50.454501 systemd[1]: sshd@20-159.223.206.54:22-147.75.109.163:52110.service: Deactivated successfully.
Dec 13 09:51:50.458499 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 09:51:50.460438 systemd-logind[1435]: Session 21 logged out. Waiting for processes to exit.
Dec 13 09:51:50.461890 systemd-logind[1435]: Removed session 21.
Dec 13 09:51:55.468475 systemd[1]: Started sshd@21-159.223.206.54:22-147.75.109.163:52116.service - OpenSSH per-connection server daemon (147.75.109.163:52116).
Dec 13 09:51:55.530536 sshd[5632]: Accepted publickey for core from 147.75.109.163 port 52116 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4
Dec 13 09:51:55.533484 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:51:55.541589 systemd-logind[1435]: New session 22 of user core.
Dec 13 09:51:55.550261 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 09:51:55.757208 sshd[5632]: pam_unix(sshd:session): session closed for user core
Dec 13 09:51:55.765117 systemd[1]: sshd@21-159.223.206.54:22-147.75.109.163:52116.service: Deactivated successfully.
Dec 13 09:51:55.769465 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 09:51:55.771526 systemd-logind[1435]: Session 22 logged out. Waiting for processes to exit.
Dec 13 09:51:55.773674 systemd-logind[1435]: Removed session 22.
Dec 13 09:52:00.781258 systemd[1]: Started sshd@22-159.223.206.54:22-147.75.109.163:45062.service - OpenSSH per-connection server daemon (147.75.109.163:45062).
Dec 13 09:52:00.957910 sshd[5669]: Accepted publickey for core from 147.75.109.163 port 45062 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4
Dec 13 09:52:00.964474 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:52:00.974048 systemd-logind[1435]: New session 23 of user core.
Dec 13 09:52:00.983248 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 09:52:01.402419 sshd[5669]: pam_unix(sshd:session): session closed for user core
Dec 13 09:52:01.409254 systemd[1]: sshd@22-159.223.206.54:22-147.75.109.163:45062.service: Deactivated successfully.
Dec 13 09:52:01.412598 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 09:52:01.416191 systemd-logind[1435]: Session 23 logged out. Waiting for processes to exit.
Dec 13 09:52:01.418686 systemd-logind[1435]: Removed session 23.
Dec 13 09:52:06.422288 systemd[1]: Started sshd@23-159.223.206.54:22-147.75.109.163:40516.service - OpenSSH per-connection server daemon (147.75.109.163:40516).
Dec 13 09:52:06.475956 sshd[5688]: Accepted publickey for core from 147.75.109.163 port 40516 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4
Dec 13 09:52:06.477757 sshd[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:52:06.484563 systemd-logind[1435]: New session 24 of user core.
Dec 13 09:52:06.488165 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 09:52:06.713220 sshd[5688]: pam_unix(sshd:session): session closed for user core
Dec 13 09:52:06.718782 systemd[1]: sshd@23-159.223.206.54:22-147.75.109.163:40516.service: Deactivated successfully.
Dec 13 09:52:06.722321 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 09:52:06.723938 systemd-logind[1435]: Session 24 logged out. Waiting for processes to exit.
Dec 13 09:52:06.725726 systemd-logind[1435]: Removed session 24.
Dec 13 09:52:07.784382 kubelet[2549]: E1213 09:52:07.783999 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:52:10.752895 kubelet[2549]: E1213 09:52:10.751758 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:52:11.732318 systemd[1]: Started sshd@24-159.223.206.54:22-147.75.109.163:40518.service - OpenSSH per-connection server daemon (147.75.109.163:40518).
Dec 13 09:52:11.792439 sshd[5725]: Accepted publickey for core from 147.75.109.163 port 40518 ssh2: RSA SHA256:+8uCT8SkxIXioNlPTvLVKvDkt1DQL7UiLdQc1FAbEg4
Dec 13 09:52:11.795199 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:52:11.803934 systemd-logind[1435]: New session 25 of user core.
Dec 13 09:52:11.808345 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 09:52:12.010248 sshd[5725]: pam_unix(sshd:session): session closed for user core
Dec 13 09:52:12.017810 systemd[1]: sshd@24-159.223.206.54:22-147.75.109.163:40518.service: Deactivated successfully.
Dec 13 09:52:12.022753 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 09:52:12.024386 systemd-logind[1435]: Session 25 logged out. Waiting for processes to exit.
Dec 13 09:52:12.025965 systemd-logind[1435]: Removed session 25.
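The "New session N of user core" and "Removed session N" lines come from systemd-logind, which tracks each login alongside its session-N.scope unit. While one of these sessions is still open, the same state can be read back with the standard logind tooling, for example:

    loginctl list-sessions
    loginctl session-status 25

Both are ordinary loginctl invocations and simply query the login manager that emits the messages above.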