Jul 2 00:17:03.036407 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:17:03.036449 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:17:03.036469 kernel: BIOS-provided physical RAM map:
Jul 2 00:17:03.036480 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:17:03.036490 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:17:03.036500 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:17:03.036512 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jul 2 00:17:03.039345 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jul 2 00:17:03.039360 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 00:17:03.039389 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:17:03.039400 kernel: NX (Execute Disable) protection: active
Jul 2 00:17:03.039411 kernel: APIC: Static calls initialized
Jul 2 00:17:03.039423 kernel: SMBIOS 2.8 present.
Jul 2 00:17:03.039436 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 2 00:17:03.039449 kernel: Hypervisor detected: KVM
Jul 2 00:17:03.039467 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:17:03.039479 kernel: kvm-clock: using sched offset of 4328522848 cycles
Jul 2 00:17:03.039494 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:17:03.039508 kernel: tsc: Detected 2494.140 MHz processor
Jul 2 00:17:03.039537 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:17:03.039551 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:17:03.039565 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jul 2 00:17:03.039578 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:17:03.039591 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:17:03.039610 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:17:03.039622 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jul 2 00:17:03.039634 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:17:03.039647 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:17:03.039660 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:17:03.039673 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 2 00:17:03.039686 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:17:03.039700 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:17:03.039714 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:17:03.039731 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:17:03.039743 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 2 00:17:03.039755 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 2 00:17:03.039768 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 2 00:17:03.039781 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 2 00:17:03.039795 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 2 00:17:03.039809 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 2 00:17:03.039834 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 2 00:17:03.039850 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:17:03.039864 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:17:03.039878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 2 00:17:03.039892 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 2 00:17:03.039906 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jul 2 00:17:03.039919 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jul 2 00:17:03.039940 kernel: Zone ranges:
Jul 2 00:17:03.039953 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:17:03.039966 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jul 2 00:17:03.039979 kernel: Normal empty
Jul 2 00:17:03.039993 kernel: Movable zone start for each node
Jul 2 00:17:03.040008 kernel: Early memory node ranges
Jul 2 00:17:03.040022 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:17:03.040036 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jul 2 00:17:03.040050 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jul 2 00:17:03.040069 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:17:03.040083 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:17:03.040098 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jul 2 00:17:03.040113 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 00:17:03.040127 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:17:03.040142 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:17:03.040157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:17:03.040171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:17:03.040186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:17:03.040205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:17:03.040219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:17:03.040234 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:17:03.040249 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:17:03.040264 kernel: TSC deadline timer available
Jul 2 00:17:03.040277 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:17:03.040292 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:17:03.040306 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 2 00:17:03.040321 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:17:03.040340 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:17:03.040354 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:17:03.040368 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:17:03.040383 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:17:03.040397 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:17:03.040410 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 2 00:17:03.040427 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:17:03.040442 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:17:03.040460 kernel: random: crng init done
Jul 2 00:17:03.040474 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:17:03.040488 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:17:03.040502 kernel: Fallback order for Node 0: 0
Jul 2 00:17:03.040515 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jul 2 00:17:03.040545 kernel: Policy zone: DMA32
Jul 2 00:17:03.040558 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:17:03.040573 kernel: Memory: 1965044K/2096600K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved)
Jul 2 00:17:03.040587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:17:03.040605 kernel: Kernel/User page tables isolation: enabled
Jul 2 00:17:03.040619 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:17:03.040632 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:17:03.040646 kernel: Dynamic Preempt: voluntary
Jul 2 00:17:03.040661 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:17:03.040675 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:17:03.040689 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:17:03.040702 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:17:03.040718 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:17:03.040736 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:17:03.040751 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:17:03.040764 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:17:03.040778 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 00:17:03.040792 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:17:03.040807 kernel: Console: colour VGA+ 80x25
Jul 2 00:17:03.040821 kernel: printk: console [tty0] enabled
Jul 2 00:17:03.040835 kernel: printk: console [ttyS0] enabled
Jul 2 00:17:03.040850 kernel: ACPI: Core revision 20230628
Jul 2 00:17:03.040864 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:17:03.040880 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:17:03.040894 kernel: x2apic enabled
Jul 2 00:17:03.040908 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:17:03.040922 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:17:03.040937 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jul 2 00:17:03.040951 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jul 2 00:17:03.040965 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 00:17:03.040980 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 00:17:03.041010 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:17:03.041025 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:17:03.041040 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:17:03.041058 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:17:03.041073 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 2 00:17:03.041087 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:17:03.041103 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:17:03.041118 kernel: MDS: Mitigation: Clear CPU buffers
Jul 2 00:17:03.041133 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:17:03.041152 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:17:03.041164 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:17:03.041178 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:17:03.041193 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:17:03.041209 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 00:17:03.041224 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:17:03.041239 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:17:03.041253 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:17:03.041273 kernel: SELinux: Initializing.
Jul 2 00:17:03.041289 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:17:03.041304 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:17:03.041320 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 2 00:17:03.041334 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:17:03.041350 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:17:03.041366 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:17:03.041383 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 2 00:17:03.041403 kernel: signal: max sigframe size: 1776
Jul 2 00:17:03.041419 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:17:03.041433 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:17:03.041448 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:17:03.041464 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:17:03.041480 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:17:03.041496 kernel: .... node #0, CPUs: #1
Jul 2 00:17:03.041512 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:17:03.044301 kernel: smpboot: Max logical packages: 1
Jul 2 00:17:03.044322 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jul 2 00:17:03.044358 kernel: devtmpfs: initialized
Jul 2 00:17:03.044375 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:17:03.044391 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:17:03.044408 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:17:03.044424 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:17:03.044440 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:17:03.044454 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:17:03.044469 kernel: audit: type=2000 audit(1719879422.616:1): state=initialized audit_enabled=0 res=1
Jul 2 00:17:03.044485 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:17:03.044506 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:17:03.044538 kernel: cpuidle: using governor menu
Jul 2 00:17:03.044553 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:17:03.044567 kernel: dca service started, version 1.12.1
Jul 2 00:17:03.044581 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:17:03.044596 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:17:03.044612 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:17:03.044626 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:17:03.044640 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:17:03.044661 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:17:03.044675 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:17:03.044689 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:17:03.044705 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:17:03.044720 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:17:03.044735 kernel: ACPI: Interpreter enabled
Jul 2 00:17:03.044748 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:17:03.044762 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:17:03.044777 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:17:03.044797 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:17:03.044811 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:17:03.044826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:17:03.045169 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:17:03.045334 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 2 00:17:03.045473 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 2 00:17:03.045494 kernel: acpiphp: Slot [3] registered
Jul 2 00:17:03.045515 kernel: acpiphp: Slot [4] registered
Jul 2 00:17:03.045564 kernel: acpiphp: Slot [5] registered
Jul 2 00:17:03.045579 kernel: acpiphp: Slot [6] registered
Jul 2 00:17:03.045593 kernel: acpiphp: Slot [7] registered
Jul 2 00:17:03.045606 kernel: acpiphp: Slot [8] registered
Jul 2 00:17:03.045620 kernel: acpiphp: Slot [9] registered
Jul 2 00:17:03.045635 kernel: acpiphp: Slot [10] registered
Jul 2 00:17:03.045650 kernel: acpiphp: Slot [11] registered
Jul 2 00:17:03.045665 kernel: acpiphp: Slot [12] registered
Jul 2 00:17:03.045686 kernel: acpiphp: Slot [13] registered
Jul 2 00:17:03.045702 kernel: acpiphp: Slot [14] registered
Jul 2 00:17:03.045716 kernel: acpiphp: Slot [15] registered
Jul 2 00:17:03.045729 kernel: acpiphp: Slot [16] registered
Jul 2 00:17:03.045743 kernel: acpiphp: Slot [17] registered
Jul 2 00:17:03.045758 kernel: acpiphp: Slot [18] registered
Jul 2 00:17:03.045772 kernel: acpiphp: Slot [19] registered
Jul 2 00:17:03.045787 kernel: acpiphp: Slot [20] registered
Jul 2 00:17:03.045802 kernel: acpiphp: Slot [21] registered
Jul 2 00:17:03.045818 kernel: acpiphp: Slot [22] registered
Jul 2 00:17:03.045838 kernel: acpiphp: Slot [23] registered
Jul 2 00:17:03.045852 kernel: acpiphp: Slot [24] registered
Jul 2 00:17:03.045866 kernel: acpiphp: Slot [25] registered
Jul 2 00:17:03.045880 kernel: acpiphp: Slot [26] registered
Jul 2 00:17:03.045895 kernel: acpiphp: Slot [27] registered
Jul 2 00:17:03.045910 kernel: acpiphp: Slot [28] registered
Jul 2 00:17:03.045925 kernel: acpiphp: Slot [29] registered
Jul 2 00:17:03.045939 kernel: acpiphp: Slot [30] registered
Jul 2 00:17:03.045954 kernel: acpiphp: Slot [31] registered
Jul 2 00:17:03.045974 kernel: PCI host bridge to bus 0000:00
Jul 2 00:17:03.046165 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:17:03.046302 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:17:03.046438 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:17:03.047133 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 00:17:03.047304 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 00:17:03.047443 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:17:03.047647 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:17:03.047804 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:17:03.047983 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:17:03.048143 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jul 2 00:17:03.048270 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:17:03.048412 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:17:03.048565 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:17:03.048715 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:17:03.048852 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jul 2 00:17:03.048948 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jul 2 00:17:03.049054 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:17:03.049147 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 00:17:03.049238 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 00:17:03.049353 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 2 00:17:03.049464 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 2 00:17:03.052811 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 2 00:17:03.053014 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jul 2 00:17:03.053167 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 2 00:17:03.053320 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:17:03.053472 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:17:03.053604 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jul 2 00:17:03.053754 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jul 2 00:17:03.053896 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 2 00:17:03.054054 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:17:03.054196 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jul 2 00:17:03.054343 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jul 2 00:17:03.054502 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 2 00:17:03.056841 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jul 2 00:17:03.057017 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jul 2 00:17:03.057174 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jul 2 00:17:03.057318 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 2 00:17:03.057427 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:17:03.059711 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:17:03.059923 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jul 2 00:17:03.060077 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 2 00:17:03.060252 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:17:03.060355 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jul 2 00:17:03.060449 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jul 2 00:17:03.060653 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 2 00:17:03.060803 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jul 2 00:17:03.060921 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jul 2 00:17:03.061044 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 2 00:17:03.061063 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:17:03.061077 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:17:03.061091 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:17:03.061105 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:17:03.061120 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:17:03.061142 kernel: iommu: Default domain type: Translated
Jul 2 00:17:03.061157 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:17:03.061170 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:17:03.061183 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:17:03.061197 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:17:03.061213 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jul 2 00:17:03.061374 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:17:03.061542 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:17:03.061696 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:17:03.061724 kernel: vgaarb: loaded
Jul 2 00:17:03.061738 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:17:03.061753 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:17:03.061768 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:17:03.061783 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:17:03.061799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:17:03.061813 kernel: pnp: PnP ACPI init
Jul 2 00:17:03.061825 kernel: pnp: PnP ACPI: found 4 devices
Jul 2 00:17:03.061839 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:17:03.061858 kernel: NET: Registered PF_INET protocol family
Jul 2 00:17:03.061868 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:17:03.061876 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 00:17:03.061885 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:17:03.061894 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:17:03.061903 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 00:17:03.061912 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 00:17:03.061921 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:17:03.061932 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:17:03.061949 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:17:03.061961 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:17:03.062115 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:17:03.062220 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:17:03.062304 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:17:03.062429 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 00:17:03.064733 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 00:17:03.064969 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:17:03.065164 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:17:03.065187 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 00:17:03.065326 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 35070 usecs
Jul 2 00:17:03.065340 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:17:03.065350 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:17:03.065359 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jul 2 00:17:03.065368 kernel: Initialise system trusted keyrings
Jul 2 00:17:03.065380 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 00:17:03.065400 kernel: Key type asymmetric registered
Jul 2 00:17:03.065412 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:17:03.065426 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:17:03.065442 kernel: io scheduler mq-deadline registered
Jul 2 00:17:03.065451 kernel: io scheduler kyber registered
Jul 2 00:17:03.065461 kernel: io scheduler bfq registered
Jul 2 00:17:03.065475 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:17:03.065490 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 2 00:17:03.065502 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:17:03.065521 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:17:03.065566 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:17:03.065578 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:17:03.065591 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:17:03.065603 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:17:03.065616 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:17:03.065814 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 2 00:17:03.065838 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:17:03.065956 kernel: rtc_cmos 00:03: registered as rtc0
Jul 2 00:17:03.066070 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T00:17:02 UTC (1719879422)
Jul 2 00:17:03.066156 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 2 00:17:03.066168 kernel: intel_pstate: CPU model not supported
Jul 2 00:17:03.066177 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:17:03.066186 kernel: Segment Routing with IPv6
Jul 2 00:17:03.066195 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:17:03.066204 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:17:03.066212 kernel: Key type dns_resolver registered
Jul 2 00:17:03.066226 kernel: IPI shorthand broadcast: enabled
Jul 2 00:17:03.066236 kernel: sched_clock: Marking stable (1156004805, 132905476)->(1348621523, -59711242)
Jul 2 00:17:03.066245 kernel: registered taskstats version 1
Jul 2 00:17:03.066254 kernel: Loading compiled-in X.509 certificates
Jul 2 00:17:03.066263 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:17:03.066271 kernel: Key type .fscrypt registered
Jul 2 00:17:03.066280 kernel: Key type fscrypt-provisioning registered
Jul 2 00:17:03.066289 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:17:03.066297 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:17:03.066309 kernel: ima: No architecture policies found
Jul 2 00:17:03.066318 kernel: clk: Disabling unused clocks
Jul 2 00:17:03.066326 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:17:03.066335 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:17:03.066344 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:17:03.066387 kernel: Run /init as init process
Jul 2 00:17:03.066400 kernel: with arguments:
Jul 2 00:17:03.066410 kernel: /init
Jul 2 00:17:03.066419 kernel: with environment:
Jul 2 00:17:03.066430 kernel: HOME=/
Jul 2 00:17:03.066440 kernel: TERM=linux
Jul 2 00:17:03.066448 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:17:03.066461 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:17:03.066473 systemd[1]: Detected virtualization kvm.
Jul 2 00:17:03.066482 systemd[1]: Detected architecture x86-64.
Jul 2 00:17:03.066491 systemd[1]: Running in initrd.
Jul 2 00:17:03.066505 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:17:03.066515 systemd[1]: Hostname set to .
Jul 2 00:17:03.068623 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:17:03.068636 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:17:03.068646 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:17:03.068657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:17:03.068670 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:17:03.068685 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:17:03.068707 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:17:03.068717 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:17:03.068728 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:17:03.068738 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:17:03.068748 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:17:03.068759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:17:03.068768 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:17:03.068781 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:17:03.068792 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:17:03.068806 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:17:03.068825 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:17:03.068839 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:17:03.068854 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:17:03.068872 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:17:03.068886 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:17:03.068900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:17:03.068916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:17:03.068931 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:17:03.068945 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:17:03.068961 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:17:03.068978 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:17:03.069001 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:17:03.069016 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:17:03.069032 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:17:03.069049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:03.069065 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:17:03.069081 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:17:03.069097 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:17:03.069120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:17:03.069136 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:17:03.069212 systemd-journald[182]: Collecting audit messages is disabled.
Jul 2 00:17:03.069261 systemd-journald[182]: Journal started
Jul 2 00:17:03.069296 systemd-journald[182]: Runtime Journal (/run/log/journal/d63ace3fd7f0448a8c1668aee5b89580) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:17:03.049584 systemd-modules-load[183]: Inserted module 'overlay'
Jul 2 00:17:03.079561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:17:03.086297 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:17:03.087315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:03.096218 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:17:03.106246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:17:03.107112 systemd-modules-load[183]: Inserted module 'br_netfilter'
Jul 2 00:17:03.107701 kernel: Bridge firewalling registered
Jul 2 00:17:03.109341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:17:03.117060 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:17:03.121799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:17:03.123555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:17:03.147401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:17:03.149772 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:17:03.167836 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:17:03.170612 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:17:03.180068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:17:03.190321 dracut-cmdline[215]: dracut-dracut-053
Jul 2 00:17:03.196451 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:17:03.244483 systemd-resolved[221]: Positive Trust Anchors:
Jul 2 00:17:03.244505 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:17:03.244582 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:17:03.248374 systemd-resolved[221]: Defaulting to hostname 'linux'.
Jul 2 00:17:03.250370 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:17:03.251355 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:17:03.353633 kernel: SCSI subsystem initialized
Jul 2 00:17:03.369595 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:17:03.386108 kernel: iscsi: registered transport (tcp)
Jul 2 00:17:03.419667 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:17:03.419819 kernel: QLogic iSCSI HBA Driver
Jul 2 00:17:03.493372 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:17:03.504929 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:17:03.547029 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:17:03.547151 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:17:03.547181 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:17:03.606598 kernel: raid6: avx2x4 gen() 20216 MB/s
Jul 2 00:17:03.623615 kernel: raid6: avx2x2 gen() 18555 MB/s
Jul 2 00:17:03.640903 kernel: raid6: avx2x1 gen() 18064 MB/s
Jul 2 00:17:03.641034 kernel: raid6: using algorithm avx2x4 gen() 20216 MB/s
Jul 2 00:17:03.659892 kernel: raid6: .... xor() 5269 MB/s, rmw enabled
Jul 2 00:17:03.660598 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:17:03.691590 kernel: xor: automatically using best checksumming function avx
Jul 2 00:17:03.945595 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:17:03.964910 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:17:03.975200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:17:04.006211 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jul 2 00:17:04.013391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:17:04.023788 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:17:04.063030 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jul 2 00:17:04.117418 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:17:04.125065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:17:04.214979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:17:04.227260 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:17:04.257678 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:17:04.268466 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:17:04.270001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:17:04.270829 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:17:04.282849 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:17:04.328613 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:17:04.366560 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 2 00:17:04.442972 kernel: scsi host0: Virtio SCSI HBA
Jul 2 00:17:04.443216 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 2 00:17:04.443342 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:17:04.443356 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:17:04.443388 kernel: GPT:9289727 != 125829119
Jul 2 00:17:04.443404 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:17:04.443420 kernel: GPT:9289727 != 125829119
Jul 2 00:17:04.443432 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:17:04.443444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:17:04.443460 kernel: libata version 3.00 loaded.
Jul 2 00:17:04.443478 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:17:04.443495 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:17:04.443511 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 2 00:17:04.448731 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Jul 2 00:17:04.456776 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:17:04.456984 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:17:04.459317 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:17:04.459982 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:17:04.466098 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:17:04.549018 kernel: scsi host1: ata_piix
Jul 2 00:17:04.557099 kernel: scsi host2: ata_piix
Jul 2 00:17:04.571715 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jul 2 00:17:04.571748 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jul 2 00:17:04.571762 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449)
Jul 2 00:17:04.571775 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (457)
Jul 2 00:17:04.571788 kernel: ACPI: bus type USB registered
Jul 2 00:17:04.571800 kernel: usbcore: registered new interface driver usbfs
Jul 2 00:17:04.571812 kernel: usbcore: registered new interface driver hub
Jul 2 00:17:04.571836 kernel: usbcore: registered new device driver usb
Jul 2 00:17:04.460248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:04.464674 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:04.485180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:04.543494 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:17:04.593842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:04.607741 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:17:04.620406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:17:04.625339 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:17:04.625994 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:17:04.638039 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:17:04.643446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:17:04.649311 disk-uuid[532]: Primary Header is updated.
Jul 2 00:17:04.649311 disk-uuid[532]: Secondary Entries is updated.
Jul 2 00:17:04.649311 disk-uuid[532]: Secondary Header is updated.
Jul 2 00:17:04.670082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:17:04.684573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:17:04.702215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:17:04.705419 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:17:04.738258 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 2 00:17:04.747107 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 2 00:17:04.747374 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 2 00:17:04.747605 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 2 00:17:04.748768 kernel: hub 1-0:1.0: USB hub found
Jul 2 00:17:04.748981 kernel: hub 1-0:1.0: 2 ports detected
Jul 2 00:17:05.693624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:17:05.693938 disk-uuid[534]: The operation has completed successfully.
Jul 2 00:17:05.750491 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:17:05.750695 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:17:05.778881 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:17:05.784646 sh[564]: Success
Jul 2 00:17:05.805564 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 00:17:05.865595 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:17:05.880792 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:17:05.883995 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:17:05.912561 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:17:05.912666 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:17:05.915448 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:17:05.915598 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:17:05.916963 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:17:05.930051 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:17:05.931601 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:17:05.938873 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:17:05.941730 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:17:05.953817 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:05.953920 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:17:05.953943 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:17:05.961554 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:17:05.976320 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:17:05.978612 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:05.989329 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:17:05.995836 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:17:06.104273 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:17:06.115249 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:17:06.153818 systemd-networkd[749]: lo: Link UP
Jul 2 00:17:06.153831 systemd-networkd[749]: lo: Gained carrier
Jul 2 00:17:06.156838 systemd-networkd[749]: Enumeration completed
Jul 2 00:17:06.157029 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:17:06.158151 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 2 00:17:06.158156 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 2 00:17:06.158877 systemd[1]: Reached target network.target - Network.
Jul 2 00:17:06.160320 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:17:06.160325 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:17:06.161328 systemd-networkd[749]: eth0: Link UP
Jul 2 00:17:06.161334 systemd-networkd[749]: eth0: Gained carrier
Jul 2 00:17:06.161349 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 2 00:17:06.167716 systemd-networkd[749]: eth1: Link UP
Jul 2 00:17:06.167721 systemd-networkd[749]: eth1: Gained carrier
Jul 2 00:17:06.167740 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:17:06.183294 systemd-networkd[749]: eth0: DHCPv4 address 64.23.132.250/20, gateway 64.23.128.1 acquired from 169.254.169.253
Jul 2 00:17:06.189686 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.2/20 acquired from 169.254.169.253
Jul 2 00:17:06.208486 ignition[653]: Ignition 2.18.0
Jul 2 00:17:06.209546 ignition[653]: Stage: fetch-offline
Jul 2 00:17:06.209669 ignition[653]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:06.209684 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:06.212982 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:17:06.209959 ignition[653]: parsed url from cmdline: ""
Jul 2 00:17:06.209964 ignition[653]: no config URL provided
Jul 2 00:17:06.209970 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:17:06.209981 ignition[653]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:17:06.209988 ignition[653]: failed to fetch config: resource requires networking
Jul 2 00:17:06.210316 ignition[653]: Ignition finished successfully
Jul 2 00:17:06.222146 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:17:06.251631 ignition[758]: Ignition 2.18.0
Jul 2 00:17:06.251645 ignition[758]: Stage: fetch
Jul 2 00:17:06.251912 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:06.251923 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:06.252077 ignition[758]: parsed url from cmdline: ""
Jul 2 00:17:06.252084 ignition[758]: no config URL provided
Jul 2 00:17:06.252094 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:17:06.252110 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:17:06.252135 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 2 00:17:06.276964 ignition[758]: GET result: OK
Jul 2 00:17:06.277972 ignition[758]: parsing config with SHA512: 5b5cd5f377d19a4d6dd4682a17bb5e17bf264d15c1323bdcc1f29ee846b7eb337632ebaafba9aa36fb9c7de7e31a33c85c177934420ac5d61ab4c1f6d0fd02c0
Jul 2 00:17:06.288087 unknown[758]: fetched base config from "system"
Jul 2 00:17:06.288966 ignition[758]: fetch: fetch complete
Jul 2 00:17:06.288114 unknown[758]: fetched base config from "system"
Jul 2 00:17:06.289895 ignition[758]: fetch: fetch passed
Jul 2 00:17:06.288129 unknown[758]: fetched user config from "digitalocean"
Jul 2 00:17:06.290005 ignition[758]: Ignition finished successfully
Jul 2 00:17:06.292056 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:17:06.299912 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:17:06.341235 ignition[765]: Ignition 2.18.0
Jul 2 00:17:06.341255 ignition[765]: Stage: kargs
Jul 2 00:17:06.341504 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:06.341528 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:06.343756 ignition[765]: kargs: kargs passed
Jul 2 00:17:06.346156 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:17:06.343836 ignition[765]: Ignition finished successfully
Jul 2 00:17:06.355033 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:17:06.382160 ignition[773]: Ignition 2.18.0
Jul 2 00:17:06.382178 ignition[773]: Stage: disks
Jul 2 00:17:06.382453 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:06.382469 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:06.385799 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:17:06.384018 ignition[773]: disks: disks passed
Jul 2 00:17:06.384121 ignition[773]: Ignition finished successfully
Jul 2 00:17:06.387836 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:17:06.393437 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:17:06.394232 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:17:06.395360 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:17:06.396302 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:17:06.407941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:17:06.430351 systemd-fsck[783]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:17:06.435627 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:17:06.440706 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:17:06.570543 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:17:06.571218 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:17:06.572548 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:17:06.589759 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:17:06.593449 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:17:06.602385 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jul 2 00:17:06.605718 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (791)
Jul 2 00:17:06.609548 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:06.609904 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 2 00:17:06.614259 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:17:06.614327 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:17:06.614399 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:17:06.614460 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:17:06.620306 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:17:06.626548 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:17:06.629066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:17:06.635877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:17:06.724863 coreos-metadata[794]: Jul 02 00:17:06.724 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:17:06.727340 coreos-metadata[793]: Jul 02 00:17:06.727 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:17:06.731542 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:17:06.737790 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:17:06.740598 coreos-metadata[793]: Jul 02 00:17:06.738 INFO Fetch successful
Jul 2 00:17:06.742642 coreos-metadata[794]: Jul 02 00:17:06.741 INFO Fetch successful
Jul 2 00:17:06.749218 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jul 2 00:17:06.750322 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jul 2 00:17:06.754664 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:17:06.756973 coreos-metadata[794]: Jul 02 00:17:06.756 INFO wrote hostname ci-3975.1.1-c-5be545c9fd to /sysroot/etc/hostname
Jul 2 00:17:06.758620 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:17:06.763600 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:17:06.913326 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:17:06.920882 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:17:06.923903 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:17:06.941180 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:17:06.942499 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:06.978931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:17:06.991456 ignition[913]: INFO : Ignition 2.18.0
Jul 2 00:17:06.991456 ignition[913]: INFO : Stage: mount
Jul 2 00:17:06.994102 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:06.994102 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:06.994102 ignition[913]: INFO : mount: mount passed
Jul 2 00:17:06.994102 ignition[913]: INFO : Ignition finished successfully
Jul 2 00:17:06.995121 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:17:07.005141 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:17:07.035015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:17:07.063606 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
Jul 2 00:17:07.067380 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:07.067494 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:17:07.067509 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:17:07.072594 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:17:07.076997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:17:07.122163 ignition[942]: INFO : Ignition 2.18.0
Jul 2 00:17:07.122163 ignition[942]: INFO : Stage: files
Jul 2 00:17:07.124077 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:07.124077 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:07.126021 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:17:07.126021 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:17:07.126021 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:17:07.131256 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:17:07.132808 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:17:07.132808 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:17:07.132061 unknown[942]: wrote ssh authorized keys file for user: core
Jul 2 00:17:07.135772 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:17:07.135772 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:17:07.478960 systemd-networkd[749]: eth0: Gained IPv6LL
Jul 2 00:17:07.845058 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:17:07.916602 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:17:07.916602 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:17:07.918355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:17:07.925515 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:17:07.925515 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:17:07.925515 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:17:07.925515 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 00:17:08.055001 systemd-networkd[749]: eth1: Gained IPv6LL
Jul 2 00:17:08.358536 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 00:17:08.741693 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:17:08.741693 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 00:17:08.744135 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:17:08.744135 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:17:08.744135 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 00:17:08.744135 ignition[942]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:17:08.744135 ignition[942]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:17:08.744135 ignition[942]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:17:08.749687 ignition[942]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:17:08.749687 ignition[942]: INFO : files: files passed
Jul 2 00:17:08.749687 ignition[942]: INFO : Ignition finished successfully
Jul 2 00:17:08.745999 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:17:08.766832 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:17:08.770828 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:17:08.773810 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:17:08.774648 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:17:08.799164 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:17:08.799164 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:17:08.801859 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:17:08.804487 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:17:08.806071 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:17:08.811900 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:17:08.860243 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:17:08.860379 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:17:08.862311 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:17:08.862963 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:17:08.864204 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:17:08.868826 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:17:08.900908 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:17:08.908871 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:17:08.933699 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:17:08.934773 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:17:08.937320 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:17:08.938087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:17:08.938253 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:17:08.939998 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:17:08.941378 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:17:08.942472 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:17:08.943481 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:17:08.944644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:17:08.945806 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:17:08.947394 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:17:08.948853 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:17:08.950024 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:17:08.951496 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:17:08.952728 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:17:08.952911 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:17:08.955010 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:17:08.956051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:17:08.956982 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:17:08.957115 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:17:08.958194 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:17:08.958385 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:17:08.960145 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:17:08.960608 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:17:08.961792 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:17:08.961931 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:17:08.963036 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 00:17:08.963227 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:17:08.973982 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:17:08.976001 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:17:08.976395 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:17:08.982146 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:17:08.982956 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:17:08.983277 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:17:08.985314 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:17:08.989468 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:17:09.001133 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:17:09.001329 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:17:09.005747 ignition[995]: INFO : Ignition 2.18.0
Jul 2 00:17:09.005747 ignition[995]: INFO : Stage: umount
Jul 2 00:17:09.005747 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:09.005747 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:09.005747 ignition[995]: INFO : umount: umount passed
Jul 2 00:17:09.005747 ignition[995]: INFO : Ignition finished successfully
Jul 2 00:17:09.006405 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:17:09.006674 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:17:09.012922 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:17:09.013130 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:17:09.014021 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:17:09.014166 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:17:09.015485 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:17:09.017935 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:17:09.024037 systemd[1]: Stopped target network.target - Network.
Jul 2 00:17:09.025760 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:17:09.026083 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:17:09.027118 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:17:09.029725 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:17:09.030683 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:17:09.031260 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:17:09.032587 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:17:09.033998 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:17:09.034068 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:17:09.035456 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:17:09.035566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:17:09.036685 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:17:09.036808 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:17:09.039495 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:17:09.039688 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:17:09.041735 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:17:09.043293 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:17:09.045469 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:17:09.065744 systemd-networkd[749]: eth1: DHCPv6 lease lost
Jul 2 00:17:09.070934 systemd-networkd[749]: eth0: DHCPv6 lease lost
Jul 2 00:17:09.071504 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:17:09.071706 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:17:09.076427 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:17:09.077667 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:17:09.088225 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:17:09.088404 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:17:09.090939 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:17:09.091045 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:17:09.111915 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:17:09.130726 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:17:09.130919 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:17:09.133965 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:17:09.134077 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:17:09.134819 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:17:09.134911 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:17:09.137195 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:17:09.140447 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:17:09.142166 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:17:09.152739 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:17:09.152960 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:17:09.159565 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:17:09.159772 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:17:09.161422 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:17:09.161732 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:17:09.163278 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:17:09.163360 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:17:09.164163 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:17:09.164259 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:17:09.166249 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:17:09.166325 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:17:09.167884 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:17:09.167983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:17:09.177935 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:17:09.178689 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:17:09.178841 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:17:09.180032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:17:09.180131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:09.184050 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:17:09.186037 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:17:09.193508 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:17:09.193718 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:17:09.195497 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:17:09.201961 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:17:09.228037 systemd[1]: Switching root.
Jul 2 00:17:09.262078 systemd-journald[182]: Journal stopped
Jul 2 00:17:10.552617 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:17:10.552705 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:17:10.552726 kernel: SELinux: policy capability open_perms=1
Jul 2 00:17:10.552738 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:17:10.552755 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:17:10.552778 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:17:10.552790 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:17:10.552805 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:17:10.552818 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:17:10.552835 kernel: audit: type=1403 audit(1719879429.474:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:17:10.552860 systemd[1]: Successfully loaded SELinux policy in 54.055ms.
Jul 2 00:17:10.556607 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.773ms.
Jul 2 00:17:10.556652 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:17:10.556671 systemd[1]: Detected virtualization kvm.
Jul 2 00:17:10.556690 systemd[1]: Detected architecture x86-64.
Jul 2 00:17:10.556703 systemd[1]: Detected first boot.
Jul 2 00:17:10.556718 systemd[1]: Hostname set to .
Jul 2 00:17:10.556731 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:17:10.556749 zram_generator::config[1038]: No configuration found.
Jul 2 00:17:10.556765 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:17:10.556781 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:17:10.556794 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:17:10.556807 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:17:10.556821 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:17:10.556833 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:17:10.556846 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:17:10.556858 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:17:10.556872 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:17:10.556889 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:17:10.556907 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:17:10.556920 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:17:10.556960 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:17:10.556973 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:17:10.556986 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:17:10.556999 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:17:10.557011 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:17:10.557023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:17:10.557039 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:17:10.557052 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:17:10.557065 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:17:10.557079 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:17:10.557091 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:17:10.557104 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:17:10.557120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:17:10.557133 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:17:10.557145 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:17:10.557158 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:17:10.557170 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:17:10.557183 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:17:10.557196 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:17:10.557209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:17:10.557222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:17:10.557241 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:17:10.557257 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:17:10.557270 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:17:10.557283 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:17:10.557296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:10.557308 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:17:10.557321 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:17:10.557333 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:17:10.557347 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:17:10.557362 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:17:10.557375 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:17:10.557388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:10.557400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:17:10.557413 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:17:10.557426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:17:10.557439 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:17:10.557451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:17:10.557464 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:17:10.557479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:17:10.557492 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:17:10.557506 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:17:10.557540 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:17:10.557553 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:17:10.557566 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:17:10.557578 kernel: loop: module loaded
Jul 2 00:17:10.557592 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:17:10.557609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:17:10.557622 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:17:10.557634 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:17:10.557647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:17:10.557660 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:17:10.557673 systemd[1]: Stopped verity-setup.service.
Jul 2 00:17:10.557686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:10.557698 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:17:10.557711 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:17:10.557727 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:17:10.557740 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:17:10.557753 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:17:10.557808 systemd-journald[1110]: Collecting audit messages is disabled.
Jul 2 00:17:10.557842 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:17:10.557855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:17:10.557869 systemd-journald[1110]: Journal started
Jul 2 00:17:10.557913 systemd-journald[1110]: Runtime Journal (/run/log/journal/d63ace3fd7f0448a8c1668aee5b89580) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:17:10.228096 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:17:10.252198 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:17:10.252802 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:17:10.564761 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:17:10.565277 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:17:10.565446 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:17:10.566422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:17:10.567676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:17:10.568725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:17:10.568903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:17:10.571195 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:17:10.571470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:17:10.572456 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:17:10.574920 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:17:10.575721 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:17:10.601806 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:17:10.613222 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:17:10.613872 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:17:10.613917 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:17:10.615749 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:17:10.622660 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:17:10.630831 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:17:10.631770 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:10.640832 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:17:10.650896 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:17:10.651619 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:17:10.652542 kernel: fuse: init (API version 7.39)
Jul 2 00:17:10.673780 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:17:10.674380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:17:10.677086 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:17:10.680877 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:17:10.684703 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:17:10.684910 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:17:10.685832 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:17:10.687098 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:17:10.701704 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:17:10.712924 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:17:10.729658 systemd-journald[1110]: Time spent on flushing to /var/log/journal/d63ace3fd7f0448a8c1668aee5b89580 is 93.491ms for 981 entries.
Jul 2 00:17:10.729658 systemd-journald[1110]: System Journal (/var/log/journal/d63ace3fd7f0448a8c1668aee5b89580) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:17:10.871259 systemd-journald[1110]: Received client request to flush runtime journal.
Jul 2 00:17:10.871377 kernel: ACPI: bus type drm_connector registered
Jul 2 00:17:10.871420 kernel: loop0: detected capacity change from 0 to 8
Jul 2 00:17:10.871451 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:17:10.871851 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:17:10.743220 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:17:10.756923 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:17:10.759198 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:17:10.760636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:17:10.761806 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:17:10.898361 kernel: loop1: detected capacity change from 0 to 210664
Jul 2 00:17:10.766297 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:17:10.774703 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:17:10.832285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:17:10.847914 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:17:10.880352 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:17:10.883153 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:17:10.921369 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:17:10.930689 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:17:10.952228 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:17:10.964479 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:17:10.975450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:17:10.992701 kernel: loop2: detected capacity change from 0 to 139904
Jul 2 00:17:11.062203 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 00:17:11.117916 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Jul 2 00:17:11.117949 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Jul 2 00:17:11.135546 kernel: loop4: detected capacity change from 0 to 8
Jul 2 00:17:11.140594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:17:11.144725 kernel: loop5: detected capacity change from 0 to 210664
Jul 2 00:17:11.169560 kernel: loop6: detected capacity change from 0 to 139904
Jul 2 00:17:11.194571 kernel: loop7: detected capacity change from 0 to 80568
Jul 2 00:17:11.209265 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jul 2 00:17:11.210062 (sd-merge)[1181]: Merged extensions into '/usr'.
Jul 2 00:17:11.219107 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:17:11.219294 systemd[1]: Reloading...
Jul 2 00:17:11.347753 zram_generator::config[1203]: No configuration found.
Jul 2 00:17:11.604630 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:17:11.671223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:17:11.744444 systemd[1]: Reloading finished in 524 ms.
Jul 2 00:17:11.787671 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:17:11.791420 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:17:11.802946 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:17:11.806414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:17:11.818495 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:17:11.818666 systemd[1]: Reloading...
Jul 2 00:17:11.906124 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:17:11.909633 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:17:11.910698 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:17:11.911108 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jul 2 00:17:11.911186 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jul 2 00:17:11.925444 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:17:11.925469 systemd-tmpfiles[1250]: Skipping /boot
Jul 2 00:17:11.934566 zram_generator::config[1274]: No configuration found.
Jul 2 00:17:11.969379 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:17:11.969399 systemd-tmpfiles[1250]: Skipping /boot
Jul 2 00:17:12.149681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:17:12.224806 systemd[1]: Reloading finished in 405 ms.
Jul 2 00:17:12.248842 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:17:12.254497 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:17:12.273914 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:17:12.283960 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:17:12.289851 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:17:12.299927 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:17:12.311954 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:17:12.316770 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:17:12.332055 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:17:12.336741 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:12.337076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:12.347973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:17:12.352889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:17:12.361084 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:17:12.362141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:12.362363 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:12.365343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:12.366790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:12.367135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:12.367289 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:12.375309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:12.375699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:12.385936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:17:12.386601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:12.386800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:12.393208 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:17:12.419730 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:17:12.455851 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:17:12.459230 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:17:12.461654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:17:12.467955 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:17:12.473486 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:17:12.483059 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:17:12.484222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:17:12.484599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:17:12.490226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:17:12.500931 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:17:12.502090 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:17:12.503677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:17:12.507397 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:17:12.508277 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:17:12.511144 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:17:12.528843 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jul 2 00:17:12.535709 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:17:12.550554 augenrules[1360]: No rules
Jul 2 00:17:12.555363 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:17:12.559227 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:17:12.593452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:17:12.605822 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:17:12.659242 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:17:12.660301 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:17:12.735197 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:17:12.770338 systemd-resolved[1323]: Positive Trust Anchors:
Jul 2 00:17:12.770365 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:17:12.770422 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:17:12.778287 systemd-resolved[1323]: Using system hostname 'ci-3975.1.1-c-5be545c9fd'.
Jul 2 00:17:12.781680 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:17:12.782573 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:17:12.820704 systemd-networkd[1371]: lo: Link UP
Jul 2 00:17:12.820717 systemd-networkd[1371]: lo: Gained carrier
Jul 2 00:17:12.824675 systemd-networkd[1371]: Enumeration completed
Jul 2 00:17:12.825077 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:17:12.826020 systemd[1]: Reached target network.target - Network.
Jul 2 00:17:12.828307 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-16:7d:b6:3d:79:c5.network.
Jul 2 00:17:12.833885 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-ce:46:6b:e1:71:18.network.
Jul 2 00:17:12.835889 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:17:12.836854 systemd-networkd[1371]: eth0: Link UP
Jul 2 00:17:12.836865 systemd-networkd[1371]: eth0: Gained carrier
Jul 2 00:17:12.843246 systemd-networkd[1371]: eth1: Link UP
Jul 2 00:17:12.843431 systemd-networkd[1371]: eth1: Gained carrier
Jul 2 00:17:12.847607 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1378)
Jul 2 00:17:12.851739 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jul 2 00:17:12.855215 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jul 2 00:17:12.896350 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jul 2 00:17:12.897402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:12.897624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:12.901499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:17:12.910782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:17:12.926573 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1370)
Jul 2 00:17:12.922915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:17:12.926181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:12.926243 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:17:12.926265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:12.947914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:17:12.949962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:17:12.957663 kernel: ISO 9660 Extensions: RRIP_1991A
Jul 2 00:17:12.961602 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jul 2 00:17:12.971968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:17:12.975214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:17:12.979387 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:17:12.979674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:17:12.987203 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 00:17:12.990044 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:17:12.990199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:17:12.993676 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:17:13.055610 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 2 00:17:13.151737 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:17:13.154812 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 00:17:13.161822 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:17:13.186562 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 2 00:17:13.192545 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 2 00:17:13.194691 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:17:13.201560 kernel: Console: switching to colour dummy device 80x25
Jul 2 00:17:13.209624 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 2 00:17:13.209749 kernel: [drm] features: -context_init
Jul 2 00:17:13.209776 kernel: [drm] number of scanouts: 1
Jul 2 00:17:13.210240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:13.222590 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:17:13.233363 kernel: [drm] number of cap sets: 0
Jul 2 00:17:13.239738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:17:13.241960 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jul 2 00:17:13.243698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:13.251907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:13.255550 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 2 00:17:13.255675 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:17:13.264567 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 2 00:17:13.329365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:17:13.329702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:13.406019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:13.546575 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:17:13.551702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:13.587415 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:17:13.599106 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:17:13.623658 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:17:13.667315 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:17:13.668946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:17:13.669112 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:17:13.669388 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:17:13.669558 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:17:13.672236 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:17:13.674348 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:17:13.674495 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:17:13.674604 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:17:13.674647 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:17:13.674797 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:17:13.677087 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:17:13.679890 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:17:13.703355 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:17:13.705661 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:17:13.706911 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:17:13.708701 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:17:13.709474 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:17:13.710245 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:17:13.710289 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:17:13.751704 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:17:13.766117 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 00:17:13.776577 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:17:13.780159 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:17:13.787911 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:17:13.796942 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:17:13.799891 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:17:13.812097 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:17:13.827031 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:17:13.842048 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:17:13.852001 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:17:13.874024 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:17:13.884071 coreos-metadata[1435]: Jul 02 00:17:13.872 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:17:13.879270 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:17:13.900446 coreos-metadata[1435]: Jul 02 00:17:13.891 INFO Fetch successful
Jul 2 00:17:13.880409 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:17:13.901130 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:17:13.920332 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:17:13.934951 jq[1437]: false
Jul 2 00:17:13.929394 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:17:13.955802 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:17:13.956274 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:17:14.045777 extend-filesystems[1440]: Found loop4
Jul 2 00:17:14.045777 extend-filesystems[1440]: Found loop5
Jul 2 00:17:14.045777 extend-filesystems[1440]: Found loop6
Jul 2 00:17:14.045777 extend-filesystems[1440]: Found loop7
Jul 2 00:17:14.045777 extend-filesystems[1440]: Found vda
Jul 2 00:17:14.045777 extend-filesystems[1440]: Found vda1
Jul 2 00:17:14.131359 jq[1447]: true
Jul 2 00:17:14.147023 extend-filesystems[1440]: Found vda2
Jul 2 00:17:14.147023 extend-filesystems[1440]: Found vda3
Jul 2 00:17:14.147023 extend-filesystems[1440]: Found usr
Jul 2 00:17:14.147023 extend-filesystems[1440]: Found vda4
Jul 2 00:17:14.147023 extend-filesystems[1440]: Found vda6
Jul 2 00:17:14.147023 extend-filesystems[1440]: Found vda7
Jul 2 00:17:14.147023 extend-filesystems[1440]: Found vda9
Jul 2 00:17:14.147023 extend-filesystems[1440]: Checking size of /dev/vda9
Jul 2 00:17:14.048856 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:17:14.242319 tar[1451]: linux-amd64/helm
Jul 2 00:17:14.290860 update_engine[1446]: I0702 00:17:14.174388 1446 main.cc:92] Flatcar Update Engine starting
Jul 2 00:17:14.290860 update_engine[1446]: I0702 00:17:14.248371 1446 update_check_scheduler.cc:74] Next update check in 2m17s
Jul 2 00:17:14.235363 dbus-daemon[1436]: [system] SELinux support is enabled
Jul 2 00:17:14.068225 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:17:14.068655 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:17:14.160092 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:17:14.247432 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:17:14.354940 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jul 2 00:17:14.267041 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:17:14.355386 extend-filesystems[1440]: Resized partition /dev/vda9
Jul 2 00:17:14.268258 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:17:14.377352 extend-filesystems[1482]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:17:14.276488 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:17:14.455598 jq[1471]: true
Jul 2 00:17:14.299810 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:17:14.299897 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:17:14.304068 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:17:14.304214 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jul 2 00:17:14.304253 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:17:14.309682 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:17:14.331434 systemd-logind[1445]: New seat seat0.
Jul 2 00:17:14.332010 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:17:14.378471 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:17:14.378509 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:17:14.394710 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:17:14.454965 systemd-networkd[1371]: eth0: Gained IPv6LL
Jul 2 00:17:14.456285 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jul 2 00:17:14.471456 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:17:14.487926 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:17:14.508285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:17:14.523462 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:17:14.586481 systemd-networkd[1371]: eth1: Gained IPv6LL
Jul 2 00:17:14.592438 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jul 2 00:17:14.707653 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1375)
Jul 2 00:17:14.744047 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:17:15.013835 bash[1507]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:17:15.067760 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:17:15.104072 systemd[1]: Starting sshkeys.service...
Jul 2 00:17:15.170189 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 2 00:17:15.213034 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 00:17:15.235608 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 00:17:15.243134 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:17:15.243134 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 2 00:17:15.243134 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 2 00:17:15.255511 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Jul 2 00:17:15.255511 extend-filesystems[1440]: Found vdb
Jul 2 00:17:15.244886 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:17:15.247895 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:17:15.287557 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:17:15.426265 coreos-metadata[1519]: Jul 02 00:17:15.392 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:17:15.426265 coreos-metadata[1519]: Jul 02 00:17:15.425 INFO Fetch successful
Jul 2 00:17:15.518034 unknown[1519]: wrote ssh authorized keys file for user: core
Jul 2 00:17:15.621267 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:17:15.624795 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:17:15.632365 systemd[1]: Finished sshkeys.service.
Jul 2 00:17:15.718490 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:17:15.792574 containerd[1472]: time="2024-07-02T00:17:15.789105906Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:17:15.813633 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:17:15.838694 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:17:15.904706 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:17:15.905003 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:17:15.914383 containerd[1472]: time="2024-07-02T00:17:15.913272052Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:17:15.914383 containerd[1472]: time="2024-07-02T00:17:15.913358623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:15.918618 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:17:15.923298 containerd[1472]: time="2024-07-02T00:17:15.922791514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:17:15.923298 containerd[1472]: time="2024-07-02T00:17:15.922852002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.924432371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.924482463Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.924659473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.924737050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.924755144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.924843990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.925159957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.925189709Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.925207274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.925388736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:17:15.926831 containerd[1472]: time="2024-07-02T00:17:15.925409339Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:17:15.927313 containerd[1472]: time="2024-07-02T00:17:15.925488819Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:17:15.927313 containerd[1472]: time="2024-07-02T00:17:15.925508524Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:17:15.950071 containerd[1472]: time="2024-07-02T00:17:15.949994927Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:17:15.950071 containerd[1472]: time="2024-07-02T00:17:15.950073888Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:17:15.950268 containerd[1472]: time="2024-07-02T00:17:15.950098220Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:17:15.950268 containerd[1472]: time="2024-07-02T00:17:15.950159691Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:17:15.950268 containerd[1472]: time="2024-07-02T00:17:15.950182815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:17:15.950268 containerd[1472]: time="2024-07-02T00:17:15.950200677Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:17:15.950268 containerd[1472]: time="2024-07-02T00:17:15.950218497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:17:15.951498 containerd[1472]: time="2024-07-02T00:17:15.950488552Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:17:15.952692 containerd[1472]: time="2024-07-02T00:17:15.952263049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:17:15.952830 containerd[1472]: time="2024-07-02T00:17:15.952697692Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:17:15.952830 containerd[1472]: time="2024-07-02T00:17:15.952744545Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:17:15.952830 containerd[1472]: time="2024-07-02T00:17:15.952769656Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:17:15.952830 containerd[1472]: time="2024-07-02T00:17:15.952820775Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:17:15.952960 containerd[1472]: time="2024-07-02T00:17:15.952842931Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:17:15.952960 containerd[1472]: time="2024-07-02T00:17:15.952862217Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:17:15.952960 containerd[1472]: time="2024-07-02T00:17:15.952899093Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:17:15.952960 containerd[1472]: time="2024-07-02T00:17:15.952922257Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:17:15.952960 containerd[1472]: time="2024-07-02T00:17:15.952955163Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:17:15.953127 containerd[1472]: time="2024-07-02T00:17:15.952979443Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:17:15.953600 containerd[1472]: time="2024-07-02T00:17:15.953560540Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:17:15.957551 containerd[1472]: time="2024-07-02T00:17:15.956593692Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:17:15.957551 containerd[1472]: time="2024-07-02T00:17:15.956667603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957551 containerd[1472]: time="2024-07-02T00:17:15.956684275Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:17:15.957551 containerd[1472]: time="2024-07-02T00:17:15.956723818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:17:15.957551 containerd[1472]: time="2024-07-02T00:17:15.956846711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957835 containerd[1472]: time="2024-07-02T00:17:15.957602375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957835 containerd[1472]: time="2024-07-02T00:17:15.957661873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957835 containerd[1472]: time="2024-07-02T00:17:15.957683887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957835 containerd[1472]: time="2024-07-02T00:17:15.957705058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957835 containerd[1472]: time="2024-07-02T00:17:15.957742081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957835 containerd[1472]: time="2024-07-02T00:17:15.957761882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957835 containerd[1472]: time="2024-07-02T00:17:15.957779282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.957835 containerd[1472]: time="2024-07-02T00:17:15.957793812Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:17:15.960549 containerd[1472]: time="2024-07-02T00:17:15.959488334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.960549 containerd[1472]: time="2024-07-02T00:17:15.959630413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.960549 containerd[1472]: time="2024-07-02T00:17:15.959665209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.960549 containerd[1472]: time="2024-07-02T00:17:15.959681711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.960549 containerd[1472]: time="2024-07-02T00:17:15.959718493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.960549 containerd[1472]: time="2024-07-02T00:17:15.959741661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.960878 containerd[1472]: time="2024-07-02T00:17:15.960581609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.960878 containerd[1472]: time="2024-07-02T00:17:15.960640912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:17:15.961327 containerd[1472]: time="2024-07-02T00:17:15.961216237Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:17:15.961671 containerd[1472]: time="2024-07-02T00:17:15.961327799Z" level=info msg="Connect containerd service" Jul 2 00:17:15.965549 containerd[1472]: time="2024-07-02T00:17:15.963612634Z" level=info msg="using legacy CRI server" Jul 2 00:17:15.965549 containerd[1472]: time="2024-07-02T00:17:15.963668582Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:17:15.965549 containerd[1472]: time="2024-07-02T00:17:15.963858627Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:17:15.968110 containerd[1472]: time="2024-07-02T00:17:15.967457239Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:17:15.969266 containerd[1472]: time="2024-07-02T00:17:15.969188063Z" level=info msg="Start subscribing containerd event" Jul 2 00:17:15.969391 containerd[1472]: time="2024-07-02T00:17:15.969291830Z" level=info msg="Start recovering state" Jul 2 00:17:15.969444 containerd[1472]: time="2024-07-02T00:17:15.969408804Z" level=info msg="Start event monitor" Jul 2 00:17:15.969444 containerd[1472]: time="2024-07-02T00:17:15.969425551Z" level=info msg="Start snapshots 
syncer" Jul 2 00:17:15.969493 containerd[1472]: time="2024-07-02T00:17:15.969441128Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:17:15.969493 containerd[1472]: time="2024-07-02T00:17:15.969453345Z" level=info msg="Start streaming server" Jul 2 00:17:15.977710 containerd[1472]: time="2024-07-02T00:17:15.969952231Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:17:15.977710 containerd[1472]: time="2024-07-02T00:17:15.970011184Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:17:15.977710 containerd[1472]: time="2024-07-02T00:17:15.970026578Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:17:15.977710 containerd[1472]: time="2024-07-02T00:17:15.970039510Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:17:15.977710 containerd[1472]: time="2024-07-02T00:17:15.970325973Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:17:15.977710 containerd[1472]: time="2024-07-02T00:17:15.970403230Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:17:15.977710 containerd[1472]: time="2024-07-02T00:17:15.970481070Z" level=info msg="containerd successfully booted in 0.190796s" Jul 2 00:17:15.970595 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:17:15.976018 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:17:15.990865 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:17:16.005841 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jul 2 00:17:16.010397 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:17:16.444984 tar[1451]: linux-amd64/LICENSE Jul 2 00:17:16.446995 tar[1451]: linux-amd64/README.md Jul 2 00:17:16.483187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:17:17.160727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:17.167234 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:17:17.169156 systemd[1]: Startup finished in 1.319s (kernel) + 6.728s (initrd) + 7.747s (userspace) = 15.795s. Jul 2 00:17:17.176357 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:17:18.165132 kubelet[1559]: E0702 00:17:18.165041 1559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:17:18.169067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:17:18.169284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:17:18.170257 systemd[1]: kubelet.service: Consumed 1.514s CPU time. Jul 2 00:17:20.516997 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:17:20.521966 systemd[1]: Started sshd@0-64.23.132.250:22-147.75.109.163:33090.service - OpenSSH per-connection server daemon (147.75.109.163:33090). Jul 2 00:17:20.596166 sshd[1573]: Accepted publickey for core from 147.75.109.163 port 33090 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:20.599329 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:20.610553 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 2 00:17:20.624103 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:17:20.628025 systemd-logind[1445]: New session 1 of user core. Jul 2 00:17:20.644768 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:17:20.661030 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:17:20.666135 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:20.821581 systemd[1577]: Queued start job for default target default.target. Jul 2 00:17:20.829815 systemd[1577]: Created slice app.slice - User Application Slice. Jul 2 00:17:20.829851 systemd[1577]: Reached target paths.target - Paths. Jul 2 00:17:20.829867 systemd[1577]: Reached target timers.target - Timers. Jul 2 00:17:20.831443 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:17:20.848457 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:17:20.848646 systemd[1577]: Reached target sockets.target - Sockets. Jul 2 00:17:20.848663 systemd[1577]: Reached target basic.target - Basic System. Jul 2 00:17:20.848718 systemd[1577]: Reached target default.target - Main User Target. Jul 2 00:17:20.848753 systemd[1577]: Startup finished in 170ms. Jul 2 00:17:20.848973 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:17:20.858997 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:17:20.933053 systemd[1]: Started sshd@1-64.23.132.250:22-147.75.109.163:33100.service - OpenSSH per-connection server daemon (147.75.109.163:33100). Jul 2 00:17:20.986503 sshd[1588]: Accepted publickey for core from 147.75.109.163 port 33100 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:20.988652 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:20.994675 systemd-logind[1445]: New session 2 of user core. 
Jul 2 00:17:21.001859 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:17:21.082115 sshd[1588]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:21.095408 systemd[1]: sshd@1-64.23.132.250:22-147.75.109.163:33100.service: Deactivated successfully. Jul 2 00:17:21.097824 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:17:21.100933 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:17:21.107089 systemd[1]: Started sshd@2-64.23.132.250:22-147.75.109.163:33106.service - OpenSSH per-connection server daemon (147.75.109.163:33106). Jul 2 00:17:21.108714 systemd-logind[1445]: Removed session 2. Jul 2 00:17:21.152594 sshd[1595]: Accepted publickey for core from 147.75.109.163 port 33106 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:21.155358 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:21.161785 systemd-logind[1445]: New session 3 of user core. Jul 2 00:17:21.171880 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:17:21.228862 sshd[1595]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:21.239749 systemd[1]: sshd@2-64.23.132.250:22-147.75.109.163:33106.service: Deactivated successfully. Jul 2 00:17:21.241696 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:17:21.242429 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:17:21.248944 systemd[1]: Started sshd@3-64.23.132.250:22-147.75.109.163:54736.service - OpenSSH per-connection server daemon (147.75.109.163:54736). Jul 2 00:17:21.251071 systemd-logind[1445]: Removed session 3. 
Jul 2 00:17:21.295168 sshd[1602]: Accepted publickey for core from 147.75.109.163 port 54736 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:21.297164 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:21.303962 systemd-logind[1445]: New session 4 of user core. Jul 2 00:17:21.313816 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:17:21.379866 sshd[1602]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:21.393063 systemd[1]: sshd@3-64.23.132.250:22-147.75.109.163:54736.service: Deactivated successfully. Jul 2 00:17:21.395014 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:17:21.396799 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:17:21.400930 systemd[1]: Started sshd@4-64.23.132.250:22-147.75.109.163:54750.service - OpenSSH per-connection server daemon (147.75.109.163:54750). Jul 2 00:17:21.403348 systemd-logind[1445]: Removed session 4. Jul 2 00:17:21.454158 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 54750 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:21.456322 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:21.462083 systemd-logind[1445]: New session 5 of user core. Jul 2 00:17:21.469882 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:17:21.543895 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:17:21.544330 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:17:21.563207 sudo[1612]: pam_unix(sudo:session): session closed for user root Jul 2 00:17:21.567984 sshd[1609]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:21.581381 systemd[1]: sshd@4-64.23.132.250:22-147.75.109.163:54750.service: Deactivated successfully. 
Jul 2 00:17:21.584463 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:17:21.588049 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:17:21.592041 systemd[1]: Started sshd@5-64.23.132.250:22-147.75.109.163:54752.service - OpenSSH per-connection server daemon (147.75.109.163:54752). Jul 2 00:17:21.594329 systemd-logind[1445]: Removed session 5. Jul 2 00:17:21.645872 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 54752 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:21.647756 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:21.654972 systemd-logind[1445]: New session 6 of user core. Jul 2 00:17:21.660935 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:17:21.722590 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:17:21.723028 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:17:21.728093 sudo[1621]: pam_unix(sudo:session): session closed for user root Jul 2 00:17:21.736803 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:17:21.737220 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:17:21.753987 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:17:21.759416 auditctl[1624]: No rules Jul 2 00:17:21.759934 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:17:21.760219 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:17:21.768103 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:17:21.801219 augenrules[1642]: No rules Jul 2 00:17:21.803135 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 2 00:17:21.804754 sudo[1620]: pam_unix(sudo:session): session closed for user root Jul 2 00:17:21.809511 sshd[1617]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:21.821720 systemd[1]: sshd@5-64.23.132.250:22-147.75.109.163:54752.service: Deactivated successfully. Jul 2 00:17:21.824175 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:17:21.825137 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:17:21.841049 systemd[1]: Started sshd@6-64.23.132.250:22-147.75.109.163:54766.service - OpenSSH per-connection server daemon (147.75.109.163:54766). Jul 2 00:17:21.844304 systemd-logind[1445]: Removed session 6. Jul 2 00:17:21.886313 sshd[1650]: Accepted publickey for core from 147.75.109.163 port 54766 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:21.888559 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:21.895588 systemd-logind[1445]: New session 7 of user core. Jul 2 00:17:21.901955 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:17:21.963608 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:17:21.963962 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:17:22.164120 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:17:22.165062 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:17:22.670597 dockerd[1663]: time="2024-07-02T00:17:22.670499415Z" level=info msg="Starting up" Jul 2 00:17:22.733172 dockerd[1663]: time="2024-07-02T00:17:22.732841071Z" level=info msg="Loading containers: start." 
Jul 2 00:17:22.905561 kernel: Initializing XFRM netlink socket Jul 2 00:17:22.942165 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Jul 2 00:17:24.291187 systemd-resolved[1323]: Clock change detected. Flushing caches. Jul 2 00:17:24.291962 systemd-timesyncd[1338]: Contacted time server 23.150.41.122:123 (2.flatcar.pool.ntp.org). Jul 2 00:17:24.292176 systemd-timesyncd[1338]: Initial clock synchronization to Tue 2024-07-02 00:17:24.291111 UTC. Jul 2 00:17:24.322667 systemd-networkd[1371]: docker0: Link UP Jul 2 00:17:24.347563 dockerd[1663]: time="2024-07-02T00:17:24.346982813Z" level=info msg="Loading containers: done." Jul 2 00:17:24.444099 dockerd[1663]: time="2024-07-02T00:17:24.444047780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:17:24.444647 dockerd[1663]: time="2024-07-02T00:17:24.444608676Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:17:24.444896 dockerd[1663]: time="2024-07-02T00:17:24.444870981Z" level=info msg="Daemon has completed initialization" Jul 2 00:17:24.489289 dockerd[1663]: time="2024-07-02T00:17:24.488863735Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:17:24.491233 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:17:25.462958 containerd[1472]: time="2024-07-02T00:17:25.462060903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 00:17:26.171278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937749425.mount: Deactivated successfully. 
Jul 2 00:17:27.746287 containerd[1472]: time="2024-07-02T00:17:27.744937788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:27.746287 containerd[1472]: time="2024-07-02T00:17:27.746220303Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jul 2 00:17:27.747036 containerd[1472]: time="2024-07-02T00:17:27.746999687Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:27.752330 containerd[1472]: time="2024-07-02T00:17:27.752277258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:27.755819 containerd[1472]: time="2024-07-02T00:17:27.755759853Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 2.293645507s" Jul 2 00:17:27.755819 containerd[1472]: time="2024-07-02T00:17:27.755813247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 00:17:27.784666 containerd[1472]: time="2024-07-02T00:17:27.784629589Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 00:17:29.510890 containerd[1472]: time="2024-07-02T00:17:29.510816347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:29.513556 containerd[1472]: time="2024-07-02T00:17:29.513475145Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jul 2 00:17:29.514560 containerd[1472]: time="2024-07-02T00:17:29.514490885Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:29.518384 containerd[1472]: time="2024-07-02T00:17:29.518334539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:29.519946 containerd[1472]: time="2024-07-02T00:17:29.519884836Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 1.735027893s" Jul 2 00:17:29.520077 containerd[1472]: time="2024-07-02T00:17:29.519947903Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 00:17:29.549100 containerd[1472]: time="2024-07-02T00:17:29.549044073Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 00:17:29.689743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:17:29.696159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:29.855455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:17:29.866145 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:17:29.938339 kubelet[1877]: E0702 00:17:29.938271 1877 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:17:29.943055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:17:29.943284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:17:30.877163 containerd[1472]: time="2024-07-02T00:17:30.877060834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:30.878927 containerd[1472]: time="2024-07-02T00:17:30.878651588Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jul 2 00:17:30.880457 containerd[1472]: time="2024-07-02T00:17:30.879843521Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:30.883807 containerd[1472]: time="2024-07-02T00:17:30.883759844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:30.885308 containerd[1472]: time="2024-07-02T00:17:30.885264355Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.336174985s" Jul 2 00:17:30.885308 containerd[1472]: time="2024-07-02T00:17:30.885306710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 00:17:30.916812 containerd[1472]: time="2024-07-02T00:17:30.916763038Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 00:17:32.189832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832829789.mount: Deactivated successfully. Jul 2 00:17:32.888900 containerd[1472]: time="2024-07-02T00:17:32.888836527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:32.890130 containerd[1472]: time="2024-07-02T00:17:32.889835761Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jul 2 00:17:32.891265 containerd[1472]: time="2024-07-02T00:17:32.891170610Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:32.894513 containerd[1472]: time="2024-07-02T00:17:32.893879789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:32.894513 containerd[1472]: time="2024-07-02T00:17:32.894379802Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 1.977574861s" Jul 2 00:17:32.894513 containerd[1472]: time="2024-07-02T00:17:32.894412376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 00:17:32.924383 containerd[1472]: time="2024-07-02T00:17:32.924330618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:17:33.513784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3375949850.mount: Deactivated successfully. Jul 2 00:17:34.548569 containerd[1472]: time="2024-07-02T00:17:34.547210893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:34.548569 containerd[1472]: time="2024-07-02T00:17:34.548497495Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 00:17:34.550509 containerd[1472]: time="2024-07-02T00:17:34.549949459Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:34.555566 containerd[1472]: time="2024-07-02T00:17:34.555006826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:34.556495 containerd[1472]: time="2024-07-02T00:17:34.556456133Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.63207661s" Jul 2 00:17:34.556619 containerd[1472]: time="2024-07-02T00:17:34.556501232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 00:17:34.589242 containerd[1472]: time="2024-07-02T00:17:34.588949247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:17:35.189738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166448629.mount: Deactivated successfully. Jul 2 00:17:35.197981 containerd[1472]: time="2024-07-02T00:17:35.197098207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:35.199317 containerd[1472]: time="2024-07-02T00:17:35.199261733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 00:17:35.200551 containerd[1472]: time="2024-07-02T00:17:35.200492942Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:35.203576 containerd[1472]: time="2024-07-02T00:17:35.203453112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:35.204584 containerd[1472]: time="2024-07-02T00:17:35.204479189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 615.48328ms" Jul 2 00:17:35.204685 
containerd[1472]: time="2024-07-02T00:17:35.204585261Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 00:17:35.236808 containerd[1472]: time="2024-07-02T00:17:35.236747486Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 00:17:35.870214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1160711265.mount: Deactivated successfully. Jul 2 00:17:38.431210 containerd[1472]: time="2024-07-02T00:17:38.431125342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:38.433864 containerd[1472]: time="2024-07-02T00:17:38.433774109Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jul 2 00:17:38.436592 containerd[1472]: time="2024-07-02T00:17:38.436497870Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:38.447774 containerd[1472]: time="2024-07-02T00:17:38.444512365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:38.447774 containerd[1472]: time="2024-07-02T00:17:38.446403323Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.209596497s" Jul 2 00:17:38.447774 containerd[1472]: time="2024-07-02T00:17:38.446458215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference 
\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 00:17:40.190262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:17:40.203754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:40.498900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:40.501103 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:17:40.601495 kubelet[2075]: E0702 00:17:40.601408 2075 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:17:40.606656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:17:40.606841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:17:43.535495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:43.550304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:43.575222 systemd[1]: Reloading requested from client PID 2088 ('systemctl') (unit session-7.scope)... Jul 2 00:17:43.575395 systemd[1]: Reloading... Jul 2 00:17:43.740563 zram_generator::config[2129]: No configuration found. Jul 2 00:17:43.899142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:17:44.001691 systemd[1]: Reloading finished in 425 ms. 
Jul 2 00:17:44.060860 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:17:44.060980 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:17:44.061269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:17:44.067042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:17:44.214273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:17:44.226123 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:17:44.294888 kubelet[2180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:17:44.295332 kubelet[2180]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:17:44.295399 kubelet[2180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:17:44.296908 kubelet[2180]: I0702 00:17:44.296833 2180 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:17:44.829977 kubelet[2180]: I0702 00:17:44.829918 2180 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 00:17:44.829977 kubelet[2180]: I0702 00:17:44.829961 2180 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:17:44.830242 kubelet[2180]: I0702 00:17:44.830213 2180 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 00:17:44.853602 kubelet[2180]: I0702 00:17:44.853552 2180 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:17:44.855420 kubelet[2180]: E0702 00:17:44.855320 2180 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.132.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.871730 kubelet[2180]: I0702 00:17:44.871295 2180 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:17:44.873586 kubelet[2180]: I0702 00:17:44.873278 2180 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:17:44.873717 kubelet[2180]: I0702 00:17:44.873350 2180 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.1.1-c-5be545c9fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:17:44.874339 kubelet[2180]: I0702 00:17:44.874290 2180 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:17:44.874339 kubelet[2180]: I0702 00:17:44.874340 2180 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:17:44.874567 kubelet[2180]: I0702 00:17:44.874523 2180 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:17:44.875550 kubelet[2180]: I0702 00:17:44.875455 2180 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 00:17:44.875550 kubelet[2180]: I0702 00:17:44.875486 2180 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:17:44.875550 kubelet[2180]: I0702 00:17:44.875519 2180 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:17:44.875550 kubelet[2180]: I0702 00:17:44.875556 2180 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:17:44.877368 kubelet[2180]: W0702 00:17:44.876139 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.132.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-c-5be545c9fd&limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.877368 kubelet[2180]: E0702 00:17:44.876218 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.132.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-c-5be545c9fd&limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.880240 kubelet[2180]: W0702 00:17:44.879522 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.132.250:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.880240 kubelet[2180]: E0702 00:17:44.879737 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.132.250:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.880629 kubelet[2180]: I0702 00:17:44.880606 2180 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:17:44.883263 kubelet[2180]: I0702 00:17:44.883225 2180 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:17:44.883744 kubelet[2180]: W0702 00:17:44.883718 2180 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:17:44.888469 kubelet[2180]: I0702 00:17:44.888226 2180 server.go:1264] "Started kubelet"
Jul 2 00:17:44.890559 kubelet[2180]: I0702 00:17:44.890134 2180 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:17:44.891875 kubelet[2180]: I0702 00:17:44.891844 2180 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 00:17:44.894803 kubelet[2180]: I0702 00:17:44.894722 2180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:17:44.895264 kubelet[2180]: I0702 00:17:44.895219 2180 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:17:44.897863 kubelet[2180]: E0702 00:17:44.897741 2180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.132.250:6443/api/v1/namespaces/default/events\": dial tcp 64.23.132.250:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-c-5be545c9fd.17de3d42d1262e6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-c-5be545c9fd,UID:ci-3975.1.1-c-5be545c9fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-c-5be545c9fd,},FirstTimestamp:2024-07-02 00:17:44.888184426 +0000 UTC m=+0.651274107,LastTimestamp:2024-07-02 00:17:44.888184426 +0000 UTC m=+0.651274107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-c-5be545c9fd,}"
Jul 2 00:17:44.899897 kubelet[2180]: I0702 00:17:44.899864 2180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:17:44.904564 kubelet[2180]: E0702 00:17:44.904537 2180 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:17:44.904819 kubelet[2180]: E0702 00:17:44.904788 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-c-5be545c9fd\" not found"
Jul 2 00:17:44.904935 kubelet[2180]: I0702 00:17:44.904925 2180 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:17:44.905151 kubelet[2180]: I0702 00:17:44.905137 2180 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 00:17:44.905274 kubelet[2180]: I0702 00:17:44.905266 2180 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 00:17:44.905745 kubelet[2180]: W0702 00:17:44.905702 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.132.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.905842 kubelet[2180]: E0702 00:17:44.905832 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.132.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.906685 kubelet[2180]: I0702 00:17:44.906666 2180 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:17:44.906856 kubelet[2180]: I0702 00:17:44.906841 2180 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:17:44.907453 kubelet[2180]: E0702 00:17:44.907424 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.132.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-c-5be545c9fd?timeout=10s\": dial tcp 64.23.132.250:6443: connect: connection refused" interval="200ms"
Jul 2 00:17:44.908584 kubelet[2180]: I0702 00:17:44.908560 2180 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:17:44.921261 kubelet[2180]: I0702 00:17:44.921205 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:17:44.922682 kubelet[2180]: I0702 00:17:44.922646 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:17:44.922682 kubelet[2180]: I0702 00:17:44.922684 2180 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:17:44.922855 kubelet[2180]: I0702 00:17:44.922729 2180 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 00:17:44.922855 kubelet[2180]: E0702 00:17:44.922783 2180 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:17:44.934186 kubelet[2180]: W0702 00:17:44.934117 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.132.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.934186 kubelet[2180]: E0702 00:17:44.934191 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.132.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:44.936118 kubelet[2180]: I0702 00:17:44.936092 2180 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:17:44.936817 kubelet[2180]: I0702 00:17:44.936677 2180 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:17:44.937961 kubelet[2180]: I0702 00:17:44.937725 2180 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:17:44.941846 kubelet[2180]: I0702 00:17:44.941687 2180 policy_none.go:49] "None policy: Start"
Jul 2 00:17:44.943353 kubelet[2180]: I0702 00:17:44.942787 2180 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:17:44.943353 kubelet[2180]: I0702 00:17:44.942840 2180 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:17:44.952752 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 00:17:44.965601 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 00:17:44.979678 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 00:17:44.981318 kubelet[2180]: I0702 00:17:44.981279 2180 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:17:44.981551 kubelet[2180]: I0702 00:17:44.981484 2180 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 00:17:44.981623 kubelet[2180]: I0702 00:17:44.981612 2180 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:17:44.983805 kubelet[2180]: E0702 00:17:44.983769 2180 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-c-5be545c9fd\" not found"
Jul 2 00:17:45.007480 kubelet[2180]: I0702 00:17:45.007021 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.007716 kubelet[2180]: E0702 00:17:45.007486 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.132.250:6443/api/v1/nodes\": dial tcp 64.23.132.250:6443: connect: connection refused" node="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.023036 kubelet[2180]: I0702 00:17:45.022923 2180 topology_manager.go:215] "Topology Admit Handler" podUID="719914f378be2aaf21aeb0dd3749c8a7" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.026026 kubelet[2180]: I0702 00:17:45.024904 2180 topology_manager.go:215] "Topology Admit Handler" podUID="eb225d3c264732439e92eb60b1cc4e8a" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.026482 kubelet[2180]: I0702 00:17:45.026448 2180 topology_manager.go:215] "Topology Admit Handler" podUID="e368948e9fa485aeec91a5fc29e0f5b1" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.035960 systemd[1]: Created slice kubepods-burstable-pod719914f378be2aaf21aeb0dd3749c8a7.slice - libcontainer container kubepods-burstable-pod719914f378be2aaf21aeb0dd3749c8a7.slice.
Jul 2 00:17:45.062080 systemd[1]: Created slice kubepods-burstable-podeb225d3c264732439e92eb60b1cc4e8a.slice - libcontainer container kubepods-burstable-podeb225d3c264732439e92eb60b1cc4e8a.slice.
Jul 2 00:17:45.070043 systemd[1]: Created slice kubepods-burstable-pode368948e9fa485aeec91a5fc29e0f5b1.slice - libcontainer container kubepods-burstable-pode368948e9fa485aeec91a5fc29e0f5b1.slice.
Jul 2 00:17:45.109880 kubelet[2180]: I0702 00:17:45.106738 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/719914f378be2aaf21aeb0dd3749c8a7-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-c-5be545c9fd\" (UID: \"719914f378be2aaf21aeb0dd3749c8a7\") " pod="kube-system/kube-apiserver-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.109880 kubelet[2180]: I0702 00:17:45.106783 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/719914f378be2aaf21aeb0dd3749c8a7-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-c-5be545c9fd\" (UID: \"719914f378be2aaf21aeb0dd3749c8a7\") " pod="kube-system/kube-apiserver-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.109880 kubelet[2180]: I0702 00:17:45.106808 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.109880 kubelet[2180]: I0702 00:17:45.106834 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.109880 kubelet[2180]: I0702 00:17:45.106853 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e368948e9fa485aeec91a5fc29e0f5b1-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-c-5be545c9fd\" (UID: \"e368948e9fa485aeec91a5fc29e0f5b1\") " pod="kube-system/kube-scheduler-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.110123 kubelet[2180]: I0702 00:17:45.106871 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/719914f378be2aaf21aeb0dd3749c8a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-c-5be545c9fd\" (UID: \"719914f378be2aaf21aeb0dd3749c8a7\") " pod="kube-system/kube-apiserver-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.110123 kubelet[2180]: I0702 00:17:45.106888 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.110123 kubelet[2180]: I0702 00:17:45.106903 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.110123 kubelet[2180]: I0702 00:17:45.106921 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.110757 kubelet[2180]: E0702 00:17:45.110714 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.132.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-c-5be545c9fd?timeout=10s\": dial tcp 64.23.132.250:6443: connect: connection refused" interval="400ms"
Jul 2 00:17:45.208834 kubelet[2180]: I0702 00:17:45.208804 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.209441 kubelet[2180]: E0702 00:17:45.209407 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.132.250:6443/api/v1/nodes\": dial tcp 64.23.132.250:6443: connect: connection refused" node="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.358044 kubelet[2180]: E0702 00:17:45.358004 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:17:45.359383 containerd[1472]: time="2024-07-02T00:17:45.359339585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-c-5be545c9fd,Uid:719914f378be2aaf21aeb0dd3749c8a7,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:45.367710 kubelet[2180]: E0702 00:17:45.367207 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:17:45.372922 containerd[1472]: time="2024-07-02T00:17:45.372868408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-c-5be545c9fd,Uid:eb225d3c264732439e92eb60b1cc4e8a,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:45.373911 kubelet[2180]: E0702 00:17:45.373630 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:17:45.374151 containerd[1472]: time="2024-07-02T00:17:45.374117029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-c-5be545c9fd,Uid:e368948e9fa485aeec91a5fc29e0f5b1,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:45.512261 kubelet[2180]: E0702 00:17:45.512194 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.132.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-c-5be545c9fd?timeout=10s\": dial tcp 64.23.132.250:6443: connect: connection refused" interval="800ms"
Jul 2 00:17:45.611752 kubelet[2180]: I0702 00:17:45.611591 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.612319 kubelet[2180]: E0702 00:17:45.612213 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.132.250:6443/api/v1/nodes\": dial tcp 64.23.132.250:6443: connect: connection refused" node="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:17:45.770776 kubelet[2180]: W0702 00:17:45.770698 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.132.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:45.770776 kubelet[2180]: E0702 00:17:45.770747 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.132.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:45.984682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113153203.mount: Deactivated successfully.
Jul 2 00:17:45.990729 containerd[1472]: time="2024-07-02T00:17:45.990641374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:17:45.992234 containerd[1472]: time="2024-07-02T00:17:45.992157096Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 2 00:17:45.996751 containerd[1472]: time="2024-07-02T00:17:45.996633086Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:17:45.998270 containerd[1472]: time="2024-07-02T00:17:45.998176110Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:17:45.999140 containerd[1472]: time="2024-07-02T00:17:45.999082881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:17:46.001555 containerd[1472]: time="2024-07-02T00:17:46.000744439Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:17:46.002707 containerd[1472]: time="2024-07-02T00:17:46.002652722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:17:46.005172 containerd[1472]: time="2024-07-02T00:17:46.005123475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:17:46.008212 containerd[1472]: time="2024-07-02T00:17:46.008155324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.166986ms"
Jul 2 00:17:46.010888 containerd[1472]: time="2024-07-02T00:17:46.010609748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.606912ms"
Jul 2 00:17:46.019046 containerd[1472]: time="2024-07-02T00:17:46.018958566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.745134ms"
Jul 2 00:17:46.026307 kubelet[2180]: W0702 00:17:46.025318 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.132.250:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:46.026307 kubelet[2180]: E0702 00:17:46.025400 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.132.250:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:46.064294 kubelet[2180]: W0702 00:17:46.063955 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.132.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:46.064294 kubelet[2180]: E0702 00:17:46.064022 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.132.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused
Jul 2 00:17:46.207137 containerd[1472]: time="2024-07-02T00:17:46.207008980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:46.207479 containerd[1472]: time="2024-07-02T00:17:46.207111407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:46.207479 containerd[1472]: time="2024-07-02T00:17:46.207136991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:46.207479 containerd[1472]: time="2024-07-02T00:17:46.207150603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:46.216261 containerd[1472]: time="2024-07-02T00:17:46.215230568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:46.216518 containerd[1472]: time="2024-07-02T00:17:46.216482323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:46.216650 containerd[1472]: time="2024-07-02T00:17:46.216628196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:46.216754 containerd[1472]: time="2024-07-02T00:17:46.216724650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:46.221103 containerd[1472]: time="2024-07-02T00:17:46.221002645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:46.221103 containerd[1472]: time="2024-07-02T00:17:46.221066508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:46.221355 containerd[1472]: time="2024-07-02T00:17:46.221087685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:46.221508 containerd[1472]: time="2024-07-02T00:17:46.221164033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:46.252852 systemd[1]: Started cri-containerd-42d0ee4c7c6dcd53255f4999e409d4dde0ec95e90cc407a7f7dcc1d5a1d80176.scope - libcontainer container 42d0ee4c7c6dcd53255f4999e409d4dde0ec95e90cc407a7f7dcc1d5a1d80176.
Jul 2 00:17:46.280849 systemd[1]: Started cri-containerd-5c0aaa58cbc9203f3a44d10fe975d713f53c030bd60fe2a1c37e80abbfd5630d.scope - libcontainer container 5c0aaa58cbc9203f3a44d10fe975d713f53c030bd60fe2a1c37e80abbfd5630d.
Jul 2 00:17:46.283570 systemd[1]: Started cri-containerd-f065271e6ff489d608518279919f26326fbb0a5f6c522b977684243a17a1ec37.scope - libcontainer container f065271e6ff489d608518279919f26326fbb0a5f6c522b977684243a17a1ec37.
Jul 2 00:17:46.303580 kubelet[2180]: W0702 00:17:46.303263 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.132.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-c-5be545c9fd&limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused Jul 2 00:17:46.304079 kubelet[2180]: E0702 00:17:46.303477 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.132.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-c-5be545c9fd&limit=500&resourceVersion=0": dial tcp 64.23.132.250:6443: connect: connection refused Jul 2 00:17:46.313888 kubelet[2180]: E0702 00:17:46.313763 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.132.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-c-5be545c9fd?timeout=10s\": dial tcp 64.23.132.250:6443: connect: connection refused" interval="1.6s" Jul 2 00:17:46.380355 containerd[1472]: time="2024-07-02T00:17:46.380063721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-c-5be545c9fd,Uid:e368948e9fa485aeec91a5fc29e0f5b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c0aaa58cbc9203f3a44d10fe975d713f53c030bd60fe2a1c37e80abbfd5630d\"" Jul 2 00:17:46.392373 kubelet[2180]: E0702 00:17:46.392337 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:46.394025 containerd[1472]: time="2024-07-02T00:17:46.393959673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-c-5be545c9fd,Uid:eb225d3c264732439e92eb60b1cc4e8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"42d0ee4c7c6dcd53255f4999e409d4dde0ec95e90cc407a7f7dcc1d5a1d80176\"" Jul 2 00:17:46.398020 
kubelet[2180]: E0702 00:17:46.397988 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:46.399947 containerd[1472]: time="2024-07-02T00:17:46.397895400Z" level=info msg="CreateContainer within sandbox \"5c0aaa58cbc9203f3a44d10fe975d713f53c030bd60fe2a1c37e80abbfd5630d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:17:46.399947 containerd[1472]: time="2024-07-02T00:17:46.399104305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-c-5be545c9fd,Uid:719914f378be2aaf21aeb0dd3749c8a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f065271e6ff489d608518279919f26326fbb0a5f6c522b977684243a17a1ec37\"" Jul 2 00:17:46.401234 kubelet[2180]: E0702 00:17:46.401207 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:46.401851 containerd[1472]: time="2024-07-02T00:17:46.401799985Z" level=info msg="CreateContainer within sandbox \"42d0ee4c7c6dcd53255f4999e409d4dde0ec95e90cc407a7f7dcc1d5a1d80176\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:17:46.408490 containerd[1472]: time="2024-07-02T00:17:46.408439438Z" level=info msg="CreateContainer within sandbox \"f065271e6ff489d608518279919f26326fbb0a5f6c522b977684243a17a1ec37\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:17:46.413512 kubelet[2180]: I0702 00:17:46.413434 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:46.414365 kubelet[2180]: E0702 00:17:46.414051 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.132.250:6443/api/v1/nodes\": dial tcp 64.23.132.250:6443: connect: connection refused" 
node="ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:46.433566 containerd[1472]: time="2024-07-02T00:17:46.433461080Z" level=info msg="CreateContainer within sandbox \"42d0ee4c7c6dcd53255f4999e409d4dde0ec95e90cc407a7f7dcc1d5a1d80176\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e825cdf75da8117e38d95488b22917065e02dea77d3b40a592ef5a02b88cd263\"" Jul 2 00:17:46.435562 containerd[1472]: time="2024-07-02T00:17:46.434937724Z" level=info msg="StartContainer for \"e825cdf75da8117e38d95488b22917065e02dea77d3b40a592ef5a02b88cd263\"" Jul 2 00:17:46.442861 containerd[1472]: time="2024-07-02T00:17:46.442695113Z" level=info msg="CreateContainer within sandbox \"5c0aaa58cbc9203f3a44d10fe975d713f53c030bd60fe2a1c37e80abbfd5630d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"98a6b4b152b09860da68176548960acbdd87ebd96ac612cf2fca09ea53c96a56\"" Jul 2 00:17:46.443404 containerd[1472]: time="2024-07-02T00:17:46.443355902Z" level=info msg="StartContainer for \"98a6b4b152b09860da68176548960acbdd87ebd96ac612cf2fca09ea53c96a56\"" Jul 2 00:17:46.449091 containerd[1472]: time="2024-07-02T00:17:46.449033147Z" level=info msg="CreateContainer within sandbox \"f065271e6ff489d608518279919f26326fbb0a5f6c522b977684243a17a1ec37\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c7b4088e2238df4617aa7fe26c79205325c916cfb81fe567239041959975e9a\"" Jul 2 00:17:46.450102 containerd[1472]: time="2024-07-02T00:17:46.449935222Z" level=info msg="StartContainer for \"8c7b4088e2238df4617aa7fe26c79205325c916cfb81fe567239041959975e9a\"" Jul 2 00:17:46.492826 systemd[1]: Started cri-containerd-e825cdf75da8117e38d95488b22917065e02dea77d3b40a592ef5a02b88cd263.scope - libcontainer container e825cdf75da8117e38d95488b22917065e02dea77d3b40a592ef5a02b88cd263. 
Jul 2 00:17:46.509518 systemd[1]: Started cri-containerd-8c7b4088e2238df4617aa7fe26c79205325c916cfb81fe567239041959975e9a.scope - libcontainer container 8c7b4088e2238df4617aa7fe26c79205325c916cfb81fe567239041959975e9a. Jul 2 00:17:46.520832 systemd[1]: Started cri-containerd-98a6b4b152b09860da68176548960acbdd87ebd96ac612cf2fca09ea53c96a56.scope - libcontainer container 98a6b4b152b09860da68176548960acbdd87ebd96ac612cf2fca09ea53c96a56. Jul 2 00:17:46.601140 containerd[1472]: time="2024-07-02T00:17:46.600717797Z" level=info msg="StartContainer for \"e825cdf75da8117e38d95488b22917065e02dea77d3b40a592ef5a02b88cd263\" returns successfully" Jul 2 00:17:46.618438 containerd[1472]: time="2024-07-02T00:17:46.618375535Z" level=info msg="StartContainer for \"8c7b4088e2238df4617aa7fe26c79205325c916cfb81fe567239041959975e9a\" returns successfully" Jul 2 00:17:46.648911 containerd[1472]: time="2024-07-02T00:17:46.648452505Z" level=info msg="StartContainer for \"98a6b4b152b09860da68176548960acbdd87ebd96ac612cf2fca09ea53c96a56\" returns successfully" Jul 2 00:17:46.947578 kubelet[2180]: E0702 00:17:46.946511 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:46.951506 kubelet[2180]: E0702 00:17:46.951473 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:46.953272 kubelet[2180]: E0702 00:17:46.953210 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:47.958044 kubelet[2180]: E0702 00:17:47.957978 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:48.016313 kubelet[2180]: I0702 00:17:48.016258 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:48.880519 kubelet[2180]: I0702 00:17:48.880461 2180 apiserver.go:52] "Watching apiserver" Jul 2 00:17:48.951375 kubelet[2180]: I0702 00:17:48.951304 2180 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:48.951768 kubelet[2180]: E0702 00:17:48.951717 2180 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.1.1-c-5be545c9fd\" not found" node="ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:49.006371 kubelet[2180]: I0702 00:17:49.006312 2180 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:17:51.187905 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Jul 2 00:17:51.187930 systemd[1]: Reloading... Jul 2 00:17:51.324588 zram_generator::config[2492]: No configuration found. Jul 2 00:17:51.519522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:17:51.676313 systemd[1]: Reloading finished in 487 ms. Jul 2 00:17:51.728175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:51.741152 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:17:51.741491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:51.741594 systemd[1]: kubelet.service: Consumed 1.107s CPU time, 113.3M memory peak, 0B memory swap peak. Jul 2 00:17:51.748059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:51.911866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:17:51.920192 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:17:52.031992 kubelet[2543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:17:52.031992 kubelet[2543]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:17:52.031992 kubelet[2543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:17:52.037033 kubelet[2543]: I0702 00:17:52.036881 2543 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:17:52.047069 kubelet[2543]: I0702 00:17:52.047011 2543 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:17:52.047240 kubelet[2543]: I0702 00:17:52.047229 2543 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:17:52.047656 kubelet[2543]: I0702 00:17:52.047633 2543 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:17:52.049717 kubelet[2543]: I0702 00:17:52.049452 2543 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:17:52.051442 kubelet[2543]: I0702 00:17:52.051418 2543 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:17:52.069844 kubelet[2543]: I0702 00:17:52.069778 2543 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:17:52.070819 kubelet[2543]: I0702 00:17:52.070327 2543 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:17:52.070819 kubelet[2543]: I0702 00:17:52.070365 2543 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.1.1-c-5be545c9fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:17:52.070819 kubelet[2543]: I0702 00:17:52.070567 2543 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
00:17:52.070819 kubelet[2543]: I0702 00:17:52.070578 2543 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:17:52.071091 kubelet[2543]: I0702 00:17:52.070623 2543 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:17:52.072180 kubelet[2543]: I0702 00:17:52.071441 2543 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:17:52.072338 kubelet[2543]: I0702 00:17:52.072318 2543 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:17:52.072436 kubelet[2543]: I0702 00:17:52.072428 2543 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:17:52.074189 kubelet[2543]: I0702 00:17:52.074166 2543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:17:52.091714 kubelet[2543]: I0702 00:17:52.091674 2543 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:17:52.094416 kubelet[2543]: I0702 00:17:52.093619 2543 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:17:52.094416 kubelet[2543]: I0702 00:17:52.094174 2543 server.go:1264] "Started kubelet" Jul 2 00:17:52.101316 kubelet[2543]: I0702 00:17:52.101278 2543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:17:52.109243 kubelet[2543]: I0702 00:17:52.103159 2543 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:17:52.111573 kubelet[2543]: I0702 00:17:52.110828 2543 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:17:52.113575 kubelet[2543]: I0702 00:17:52.113370 2543 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:17:52.115146 kubelet[2543]: I0702 00:17:52.114246 2543 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:17:52.115146 kubelet[2543]: I0702 00:17:52.114443 2543 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:17:52.119124 kubelet[2543]: I0702 
00:17:52.103223 2543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:17:52.119665 kubelet[2543]: I0702 00:17:52.119638 2543 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:17:52.122499 kubelet[2543]: I0702 00:17:52.121077 2543 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:17:52.123304 kubelet[2543]: I0702 00:17:52.123263 2543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:17:52.124685 kubelet[2543]: E0702 00:17:52.124268 2543 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:17:52.129489 kubelet[2543]: I0702 00:17:52.129443 2543 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:17:52.134097 kubelet[2543]: I0702 00:17:52.134052 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:17:52.135980 kubelet[2543]: I0702 00:17:52.135943 2543 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:17:52.136620 kubelet[2543]: I0702 00:17:52.136186 2543 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:17:52.136620 kubelet[2543]: I0702 00:17:52.136222 2543 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:17:52.136620 kubelet[2543]: E0702 00:17:52.136293 2543 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:17:52.216284 kubelet[2543]: I0702 00:17:52.215687 2543 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.220065 kubelet[2543]: I0702 00:17:52.219923 2543 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:17:52.220065 kubelet[2543]: I0702 00:17:52.219944 2543 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:17:52.220065 kubelet[2543]: I0702 00:17:52.219979 2543 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:17:52.220272 kubelet[2543]: I0702 00:17:52.220189 2543 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:17:52.220272 kubelet[2543]: I0702 00:17:52.220201 2543 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:17:52.220272 kubelet[2543]: I0702 00:17:52.220220 2543 policy_none.go:49] "None policy: Start" Jul 2 00:17:52.222564 kubelet[2543]: I0702 00:17:52.222393 2543 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:17:52.222564 kubelet[2543]: I0702 00:17:52.222442 2543 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:17:52.223676 kubelet[2543]: I0702 00:17:52.223223 2543 state_mem.go:75] "Updated machine memory state" Jul 2 00:17:52.235157 kubelet[2543]: I0702 00:17:52.234478 2543 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:17:52.235458 kubelet[2543]: I0702 00:17:52.235194 2543 container_log_manager.go:186] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:17:52.236197 kubelet[2543]: I0702 00:17:52.235770 2543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:17:52.237118 kubelet[2543]: I0702 00:17:52.236679 2543 topology_manager.go:215] "Topology Admit Handler" podUID="719914f378be2aaf21aeb0dd3749c8a7" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.238786 kubelet[2543]: I0702 00:17:52.237384 2543 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.240025 kubelet[2543]: I0702 00:17:52.240002 2543 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.240406 kubelet[2543]: I0702 00:17:52.239249 2543 topology_manager.go:215] "Topology Admit Handler" podUID="eb225d3c264732439e92eb60b1cc4e8a" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.240671 kubelet[2543]: I0702 00:17:52.240644 2543 topology_manager.go:215] "Topology Admit Handler" podUID="e368948e9fa485aeec91a5fc29e0f5b1" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.270261 kubelet[2543]: W0702 00:17:52.270219 2543 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:17:52.273661 kubelet[2543]: W0702 00:17:52.273623 2543 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:17:52.280600 kubelet[2543]: W0702 00:17:52.279819 2543 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:17:52.416136 kubelet[2543]: I0702 00:17:52.416078 2543 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.416136 kubelet[2543]: I0702 00:17:52.416128 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.417136 kubelet[2543]: I0702 00:17:52.416148 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.417136 kubelet[2543]: I0702 00:17:52.416637 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/719914f378be2aaf21aeb0dd3749c8a7-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-c-5be545c9fd\" (UID: \"719914f378be2aaf21aeb0dd3749c8a7\") " pod="kube-system/kube-apiserver-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.417136 kubelet[2543]: I0702 00:17:52.416666 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/719914f378be2aaf21aeb0dd3749c8a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-c-5be545c9fd\" (UID: \"719914f378be2aaf21aeb0dd3749c8a7\") " 
pod="kube-system/kube-apiserver-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.417136 kubelet[2543]: I0702 00:17:52.416684 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.417136 kubelet[2543]: I0702 00:17:52.416702 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e368948e9fa485aeec91a5fc29e0f5b1-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-c-5be545c9fd\" (UID: \"e368948e9fa485aeec91a5fc29e0f5b1\") " pod="kube-system/kube-scheduler-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.418186 kubelet[2543]: I0702 00:17:52.416718 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/719914f378be2aaf21aeb0dd3749c8a7-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-c-5be545c9fd\" (UID: \"719914f378be2aaf21aeb0dd3749c8a7\") " pod="kube-system/kube-apiserver-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.418186 kubelet[2543]: I0702 00:17:52.416732 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb225d3c264732439e92eb60b1cc4e8a-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-c-5be545c9fd\" (UID: \"eb225d3c264732439e92eb60b1cc4e8a\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:52.573726 kubelet[2543]: E0702 00:17:52.573511 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Jul 2 00:17:52.575902 kubelet[2543]: E0702 00:17:52.575757 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:52.581827 kubelet[2543]: E0702 00:17:52.581752 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:53.075171 kubelet[2543]: I0702 00:17:53.075118 2543 apiserver.go:52] "Watching apiserver" Jul 2 00:17:53.114679 kubelet[2543]: I0702 00:17:53.114589 2543 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:17:53.188183 kubelet[2543]: E0702 00:17:53.188142 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:53.189808 kubelet[2543]: E0702 00:17:53.189093 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:17:53.252274 kubelet[2543]: W0702 00:17:53.252242 2543 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:17:53.252579 kubelet[2543]: E0702 00:17:53.252552 2543 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-c-5be545c9fd\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-c-5be545c9fd" Jul 2 00:17:53.253185 kubelet[2543]: E0702 00:17:53.253165 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 
00:17:53.374064 kubelet[2543]: I0702 00:17:53.373891 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-c-5be545c9fd" podStartSLOduration=1.373867411 podStartE2EDuration="1.373867411s" podCreationTimestamp="2024-07-02 00:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:53.329376901 +0000 UTC m=+1.391416179" watchObservedRunningTime="2024-07-02 00:17:53.373867411 +0000 UTC m=+1.435906681"
Jul 2 00:17:53.397907 kubelet[2543]: I0702 00:17:53.397787 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-c-5be545c9fd" podStartSLOduration=1.397762871 podStartE2EDuration="1.397762871s" podCreationTimestamp="2024-07-02 00:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:53.374948293 +0000 UTC m=+1.436987619" watchObservedRunningTime="2024-07-02 00:17:53.397762871 +0000 UTC m=+1.459802150"
Jul 2 00:17:53.426990 kubelet[2543]: I0702 00:17:53.426582 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-c-5be545c9fd" podStartSLOduration=1.426490973 podStartE2EDuration="1.426490973s" podCreationTimestamp="2024-07-02 00:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:53.401322965 +0000 UTC m=+1.463362242" watchObservedRunningTime="2024-07-02 00:17:53.426490973 +0000 UTC m=+1.488530251"
Jul 2 00:17:54.195567 kubelet[2543]: E0702 00:17:54.191762 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:17:57.496586 sudo[1653]: pam_unix(sudo:session): session closed for user root
Jul 2 00:17:57.500943 sshd[1650]: pam_unix(sshd:session): session closed for user core
Jul 2 00:17:57.506398 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:17:57.506914 systemd[1]: sshd@6-64.23.132.250:22-147.75.109.163:54766.service: Deactivated successfully.
Jul 2 00:17:57.509441 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:17:57.510037 systemd[1]: session-7.scope: Consumed 7.363s CPU time, 137.8M memory peak, 0B memory swap peak.
Jul 2 00:17:57.511278 systemd-logind[1445]: Removed session 7.
Jul 2 00:18:00.469756 kubelet[2543]: E0702 00:18:00.468866 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:00.681750 update_engine[1446]: I0702 00:18:00.681593 1446 update_attempter.cc:509] Updating boot flags...
Jul 2 00:18:00.775719 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2626)
Jul 2 00:18:00.920957 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2630)
Jul 2 00:18:01.211725 kubelet[2543]: E0702 00:18:01.211509 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:02.137905 kubelet[2543]: E0702 00:18:02.137187 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:02.214273 kubelet[2543]: E0702 00:18:02.214207 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:02.216991 kubelet[2543]: E0702 00:18:02.216894 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:03.231456 kubelet[2543]: E0702 00:18:03.219760 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:05.386740 kubelet[2543]: I0702 00:18:05.384810 2543 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 00:18:05.402843 containerd[1472]: time="2024-07-02T00:18:05.401907834Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 00:18:05.403710 kubelet[2543]: I0702 00:18:05.402272 2543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:18:05.758909 kubelet[2543]: I0702 00:18:05.758617 2543 topology_manager.go:215] "Topology Admit Handler" podUID="dbbfecd7-667f-4607-83ba-29c8690f8a12" podNamespace="kube-system" podName="kube-proxy-hjxvw"
Jul 2 00:18:05.811835 systemd[1]: Created slice kubepods-besteffort-poddbbfecd7_667f_4607_83ba_29c8690f8a12.slice - libcontainer container kubepods-besteffort-poddbbfecd7_667f_4607_83ba_29c8690f8a12.slice.
Jul 2 00:18:05.955228 kubelet[2543]: I0702 00:18:05.952972 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjvh\" (UniqueName: \"kubernetes.io/projected/dbbfecd7-667f-4607-83ba-29c8690f8a12-kube-api-access-4mjvh\") pod \"kube-proxy-hjxvw\" (UID: \"dbbfecd7-667f-4607-83ba-29c8690f8a12\") " pod="kube-system/kube-proxy-hjxvw"
Jul 2 00:18:05.955228 kubelet[2543]: I0702 00:18:05.953074 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbbfecd7-667f-4607-83ba-29c8690f8a12-kube-proxy\") pod \"kube-proxy-hjxvw\" (UID: \"dbbfecd7-667f-4607-83ba-29c8690f8a12\") " pod="kube-system/kube-proxy-hjxvw"
Jul 2 00:18:05.955228 kubelet[2543]: I0702 00:18:05.953108 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbbfecd7-667f-4607-83ba-29c8690f8a12-xtables-lock\") pod \"kube-proxy-hjxvw\" (UID: \"dbbfecd7-667f-4607-83ba-29c8690f8a12\") " pod="kube-system/kube-proxy-hjxvw"
Jul 2 00:18:05.955228 kubelet[2543]: I0702 00:18:05.953133 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbbfecd7-667f-4607-83ba-29c8690f8a12-lib-modules\") pod \"kube-proxy-hjxvw\" (UID: \"dbbfecd7-667f-4607-83ba-29c8690f8a12\") " pod="kube-system/kube-proxy-hjxvw"
Jul 2 00:18:06.089396 kubelet[2543]: E0702 00:18:06.089035 2543 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 2 00:18:06.089396 kubelet[2543]: E0702 00:18:06.089135 2543 projected.go:200] Error preparing data for projected volume kube-api-access-4mjvh for pod kube-system/kube-proxy-hjxvw: configmap "kube-root-ca.crt" not found
Jul 2 00:18:06.089396 kubelet[2543]: E0702 00:18:06.089253 2543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dbbfecd7-667f-4607-83ba-29c8690f8a12-kube-api-access-4mjvh podName:dbbfecd7-667f-4607-83ba-29c8690f8a12 nodeName:}" failed. No retries permitted until 2024-07-02 00:18:06.589221504 +0000 UTC m=+14.651260881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4mjvh" (UniqueName: "kubernetes.io/projected/dbbfecd7-667f-4607-83ba-29c8690f8a12-kube-api-access-4mjvh") pod "kube-proxy-hjxvw" (UID: "dbbfecd7-667f-4607-83ba-29c8690f8a12") : configmap "kube-root-ca.crt" not found
Jul 2 00:18:06.401395 kubelet[2543]: I0702 00:18:06.400547 2543 topology_manager.go:215] "Topology Admit Handler" podUID="bcfb890f-b6bb-4a5d-ac41-5a40c5712b50" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-thv2s"
Jul 2 00:18:06.414726 systemd[1]: Created slice kubepods-besteffort-podbcfb890f_b6bb_4a5d_ac41_5a40c5712b50.slice - libcontainer container kubepods-besteffort-podbcfb890f_b6bb_4a5d_ac41_5a40c5712b50.slice.
Jul 2 00:18:06.568029 kubelet[2543]: I0702 00:18:06.567915 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7xkw\" (UniqueName: \"kubernetes.io/projected/bcfb890f-b6bb-4a5d-ac41-5a40c5712b50-kube-api-access-z7xkw\") pod \"tigera-operator-76ff79f7fd-thv2s\" (UID: \"bcfb890f-b6bb-4a5d-ac41-5a40c5712b50\") " pod="tigera-operator/tigera-operator-76ff79f7fd-thv2s"
Jul 2 00:18:06.568274 kubelet[2543]: I0702 00:18:06.568048 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bcfb890f-b6bb-4a5d-ac41-5a40c5712b50-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-thv2s\" (UID: \"bcfb890f-b6bb-4a5d-ac41-5a40c5712b50\") " pod="tigera-operator/tigera-operator-76ff79f7fd-thv2s"
Jul 2 00:18:06.722660 containerd[1472]: time="2024-07-02T00:18:06.722599017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-thv2s,Uid:bcfb890f-b6bb-4a5d-ac41-5a40c5712b50,Namespace:tigera-operator,Attempt:0,}"
Jul 2 00:18:06.745596 kubelet[2543]: E0702 00:18:06.744382 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:06.745803 containerd[1472]: time="2024-07-02T00:18:06.745289946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjxvw,Uid:dbbfecd7-667f-4607-83ba-29c8690f8a12,Namespace:kube-system,Attempt:0,}"
Jul 2 00:18:06.818096 containerd[1472]: time="2024-07-02T00:18:06.817016133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:06.818096 containerd[1472]: time="2024-07-02T00:18:06.817125235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:06.818096 containerd[1472]: time="2024-07-02T00:18:06.817155940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:06.818096 containerd[1472]: time="2024-07-02T00:18:06.817174932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:06.841851 containerd[1472]: time="2024-07-02T00:18:06.841364493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:06.843245 containerd[1472]: time="2024-07-02T00:18:06.842781106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:06.843245 containerd[1472]: time="2024-07-02T00:18:06.842832555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:06.843245 containerd[1472]: time="2024-07-02T00:18:06.842847175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:06.892944 systemd[1]: Started cri-containerd-dc525f522b1cacd5356da43c02a75c281dbcbd81e712b54d00aec55fb70a8ed5.scope - libcontainer container dc525f522b1cacd5356da43c02a75c281dbcbd81e712b54d00aec55fb70a8ed5.
Jul 2 00:18:06.908788 systemd[1]: Started cri-containerd-0cdcba90c7ae64c2e56b40aea8f601926f0a4ef384c18346cd4c3b3d4f0968b1.scope - libcontainer container 0cdcba90c7ae64c2e56b40aea8f601926f0a4ef384c18346cd4c3b3d4f0968b1.
Jul 2 00:18:06.956886 containerd[1472]: time="2024-07-02T00:18:06.956836640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjxvw,Uid:dbbfecd7-667f-4607-83ba-29c8690f8a12,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cdcba90c7ae64c2e56b40aea8f601926f0a4ef384c18346cd4c3b3d4f0968b1\""
Jul 2 00:18:06.965237 kubelet[2543]: E0702 00:18:06.964850 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:06.975231 containerd[1472]: time="2024-07-02T00:18:06.974158390Z" level=info msg="CreateContainer within sandbox \"0cdcba90c7ae64c2e56b40aea8f601926f0a4ef384c18346cd4c3b3d4f0968b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:18:07.012962 containerd[1472]: time="2024-07-02T00:18:07.012898919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-thv2s,Uid:bcfb890f-b6bb-4a5d-ac41-5a40c5712b50,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dc525f522b1cacd5356da43c02a75c281dbcbd81e712b54d00aec55fb70a8ed5\""
Jul 2 00:18:07.016425 containerd[1472]: time="2024-07-02T00:18:07.016033362Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 00:18:07.027841 containerd[1472]: time="2024-07-02T00:18:07.027777729Z" level=info msg="CreateContainer within sandbox \"0cdcba90c7ae64c2e56b40aea8f601926f0a4ef384c18346cd4c3b3d4f0968b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"171a2e6c8e43b879ef885b82b20f87f311427f6899bf2afe37135fa51c4f99b0\""
Jul 2 00:18:07.031647 containerd[1472]: time="2024-07-02T00:18:07.029776246Z" level=info msg="StartContainer for \"171a2e6c8e43b879ef885b82b20f87f311427f6899bf2afe37135fa51c4f99b0\""
Jul 2 00:18:07.064809 systemd[1]: Started cri-containerd-171a2e6c8e43b879ef885b82b20f87f311427f6899bf2afe37135fa51c4f99b0.scope - libcontainer container 171a2e6c8e43b879ef885b82b20f87f311427f6899bf2afe37135fa51c4f99b0.
Jul 2 00:18:07.113296 containerd[1472]: time="2024-07-02T00:18:07.113001267Z" level=info msg="StartContainer for \"171a2e6c8e43b879ef885b82b20f87f311427f6899bf2afe37135fa51c4f99b0\" returns successfully"
Jul 2 00:18:07.246984 kubelet[2543]: E0702 00:18:07.246161 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:07.264743 kubelet[2543]: I0702 00:18:07.264680 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hjxvw" podStartSLOduration=2.264659043 podStartE2EDuration="2.264659043s" podCreationTimestamp="2024-07-02 00:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:07.264578251 +0000 UTC m=+15.326617532" watchObservedRunningTime="2024-07-02 00:18:07.264659043 +0000 UTC m=+15.326698321"
Jul 2 00:18:08.556912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502336331.mount: Deactivated successfully.
Jul 2 00:18:09.275159 containerd[1472]: time="2024-07-02T00:18:09.274995371Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:09.276480 containerd[1472]: time="2024-07-02T00:18:09.276403890Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076108"
Jul 2 00:18:09.277815 containerd[1472]: time="2024-07-02T00:18:09.277747699Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:09.280280 containerd[1472]: time="2024-07-02T00:18:09.280241832Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:09.281356 containerd[1472]: time="2024-07-02T00:18:09.281027759Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.264947314s"
Jul 2 00:18:09.281356 containerd[1472]: time="2024-07-02T00:18:09.281094221Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jul 2 00:18:09.289597 containerd[1472]: time="2024-07-02T00:18:09.289115055Z" level=info msg="CreateContainer within sandbox \"dc525f522b1cacd5356da43c02a75c281dbcbd81e712b54d00aec55fb70a8ed5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 00:18:09.339149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031533502.mount: Deactivated successfully.
Jul 2 00:18:09.360999 containerd[1472]: time="2024-07-02T00:18:09.360927055Z" level=info msg="CreateContainer within sandbox \"dc525f522b1cacd5356da43c02a75c281dbcbd81e712b54d00aec55fb70a8ed5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9d0f1f4034b20875860361db9ee245916f54c962945a3f38208815c8444bcbeb\""
Jul 2 00:18:09.362011 containerd[1472]: time="2024-07-02T00:18:09.361969992Z" level=info msg="StartContainer for \"9d0f1f4034b20875860361db9ee245916f54c962945a3f38208815c8444bcbeb\""
Jul 2 00:18:09.408884 systemd[1]: Started cri-containerd-9d0f1f4034b20875860361db9ee245916f54c962945a3f38208815c8444bcbeb.scope - libcontainer container 9d0f1f4034b20875860361db9ee245916f54c962945a3f38208815c8444bcbeb.
Jul 2 00:18:09.484199 containerd[1472]: time="2024-07-02T00:18:09.484139412Z" level=info msg="StartContainer for \"9d0f1f4034b20875860361db9ee245916f54c962945a3f38208815c8444bcbeb\" returns successfully"
Jul 2 00:18:10.283698 kubelet[2543]: I0702 00:18:10.281217 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-thv2s" podStartSLOduration=2.013812102 podStartE2EDuration="4.281197856s" podCreationTimestamp="2024-07-02 00:18:06 +0000 UTC" firstStartedPulling="2024-07-02 00:18:07.01533411 +0000 UTC m=+15.077373382" lastFinishedPulling="2024-07-02 00:18:09.282719877 +0000 UTC m=+17.344759136" observedRunningTime="2024-07-02 00:18:10.280880918 +0000 UTC m=+18.342920198" watchObservedRunningTime="2024-07-02 00:18:10.281197856 +0000 UTC m=+18.343237135"
Jul 2 00:18:12.694798 kubelet[2543]: I0702 00:18:12.693114 2543 topology_manager.go:215] "Topology Admit Handler" podUID="6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c" podNamespace="calico-system" podName="calico-typha-f94f5f8bb-grskq"
Jul 2 00:18:12.704173 systemd[1]: Created slice kubepods-besteffort-pod6e8a9d99_8d13_42e1_b56f_2cd5e2bc933c.slice - libcontainer container kubepods-besteffort-pod6e8a9d99_8d13_42e1_b56f_2cd5e2bc933c.slice.
Jul 2 00:18:12.806228 kubelet[2543]: I0702 00:18:12.806016 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcr8\" (UniqueName: \"kubernetes.io/projected/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-kube-api-access-phcr8\") pod \"calico-typha-f94f5f8bb-grskq\" (UID: \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\") " pod="calico-system/calico-typha-f94f5f8bb-grskq"
Jul 2 00:18:12.806228 kubelet[2543]: I0702 00:18:12.806099 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-tigera-ca-bundle\") pod \"calico-typha-f94f5f8bb-grskq\" (UID: \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\") " pod="calico-system/calico-typha-f94f5f8bb-grskq"
Jul 2 00:18:12.806228 kubelet[2543]: I0702 00:18:12.806123 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-typha-certs\") pod \"calico-typha-f94f5f8bb-grskq\" (UID: \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\") " pod="calico-system/calico-typha-f94f5f8bb-grskq"
Jul 2 00:18:12.913557 kubelet[2543]: I0702 00:18:12.911271 2543 topology_manager.go:215] "Topology Admit Handler" podUID="c54f3f28-1765-4f64-a375-409c62d9adde" podNamespace="calico-system" podName="calico-node-wb8pv"
Jul 2 00:18:12.953681 systemd[1]: Created slice kubepods-besteffort-podc54f3f28_1765_4f64_a375_409c62d9adde.slice - libcontainer container kubepods-besteffort-podc54f3f28_1765_4f64_a375_409c62d9adde.slice.
Jul 2 00:18:13.007112 kubelet[2543]: I0702 00:18:13.006968 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmdv\" (UniqueName: \"kubernetes.io/projected/c54f3f28-1765-4f64-a375-409c62d9adde-kube-api-access-pgmdv\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007112 kubelet[2543]: I0702 00:18:13.007012 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-lib-modules\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007112 kubelet[2543]: I0702 00:18:13.007036 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c54f3f28-1765-4f64-a375-409c62d9adde-node-certs\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007112 kubelet[2543]: I0702 00:18:13.007061 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c54f3f28-1765-4f64-a375-409c62d9adde-tigera-ca-bundle\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007112 kubelet[2543]: I0702 00:18:13.007078 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-bin-dir\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007505 kubelet[2543]: I0702 00:18:13.007099 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-log-dir\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007505 kubelet[2543]: I0702 00:18:13.007114 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-flexvol-driver-host\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007505 kubelet[2543]: I0702 00:18:13.007133 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-policysync\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007505 kubelet[2543]: I0702 00:18:13.007209 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-var-run-calico\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007505 kubelet[2543]: I0702 00:18:13.007233 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-var-lib-calico\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007947 kubelet[2543]: I0702 00:18:13.007255 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-xtables-lock\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.007947 kubelet[2543]: I0702 00:18:13.007282 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-net-dir\") pod \"calico-node-wb8pv\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " pod="calico-system/calico-node-wb8pv"
Jul 2 00:18:13.010583 kubelet[2543]: E0702 00:18:13.010219 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:13.011320 containerd[1472]: time="2024-07-02T00:18:13.011278598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f94f5f8bb-grskq,Uid:6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c,Namespace:calico-system,Attempt:0,}"
Jul 2 00:18:13.065888 kubelet[2543]: I0702 00:18:13.063414 2543 topology_manager.go:215] "Topology Admit Handler" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" podNamespace="calico-system" podName="csi-node-driver-dpgqd"
Jul 2 00:18:13.067908 kubelet[2543]: E0702 00:18:13.067636 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e"
Jul 2 00:18:13.068053 containerd[1472]: time="2024-07-02T00:18:13.066908086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:13.068053 containerd[1472]: time="2024-07-02T00:18:13.066990367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:13.068053 containerd[1472]: time="2024-07-02T00:18:13.067026314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:13.068053 containerd[1472]: time="2024-07-02T00:18:13.067043977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:13.113767 systemd[1]: Started cri-containerd-2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43.scope - libcontainer container 2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43.
Jul 2 00:18:13.117566 kubelet[2543]: E0702 00:18:13.117035 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.117566 kubelet[2543]: W0702 00:18:13.117070 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.117566 kubelet[2543]: E0702 00:18:13.117104 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.118094 kubelet[2543]: E0702 00:18:13.118067 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.118094 kubelet[2543]: W0702 00:18:13.118083 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.118155 kubelet[2543]: E0702 00:18:13.118102 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.119794 kubelet[2543]: E0702 00:18:13.119575 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.119794 kubelet[2543]: W0702 00:18:13.119599 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.119794 kubelet[2543]: E0702 00:18:13.119644 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.120347 kubelet[2543]: E0702 00:18:13.119875 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.120347 kubelet[2543]: W0702 00:18:13.119885 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.120347 kubelet[2543]: E0702 00:18:13.119903 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.120347 kubelet[2543]: E0702 00:18:13.120171 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.120347 kubelet[2543]: W0702 00:18:13.120181 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.120347 kubelet[2543]: E0702 00:18:13.120196 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.122113 kubelet[2543]: E0702 00:18:13.122088 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.122113 kubelet[2543]: W0702 00:18:13.122108 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.122347 kubelet[2543]: E0702 00:18:13.122128 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.123375 kubelet[2543]: E0702 00:18:13.123238 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.123375 kubelet[2543]: W0702 00:18:13.123260 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.123375 kubelet[2543]: E0702 00:18:13.123283 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.123926 kubelet[2543]: E0702 00:18:13.123654 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.123926 kubelet[2543]: W0702 00:18:13.123666 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.123926 kubelet[2543]: E0702 00:18:13.123681 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.124659 kubelet[2543]: E0702 00:18:13.124633 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.124659 kubelet[2543]: W0702 00:18:13.124652 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.124789 kubelet[2543]: E0702 00:18:13.124670 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.127083 kubelet[2543]: E0702 00:18:13.127030 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.127083 kubelet[2543]: W0702 00:18:13.127053 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.127083 kubelet[2543]: E0702 00:18:13.127074 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.127742 kubelet[2543]: E0702 00:18:13.127723 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.127742 kubelet[2543]: W0702 00:18:13.127738 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.128684 kubelet[2543]: E0702 00:18:13.127752 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.128684 kubelet[2543]: E0702 00:18:13.128295 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.128684 kubelet[2543]: W0702 00:18:13.128306 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.128684 kubelet[2543]: E0702 00:18:13.128318 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.128850 kubelet[2543]: E0702 00:18:13.128833 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.128850 kubelet[2543]: W0702 00:18:13.128847 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.128961 kubelet[2543]: E0702 00:18:13.128859 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.129412 kubelet[2543]: E0702 00:18:13.129397 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.129412 kubelet[2543]: W0702 00:18:13.129409 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.129611 kubelet[2543]: E0702 00:18:13.129421 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.129974 kubelet[2543]: E0702 00:18:13.129947 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.129974 kubelet[2543]: W0702 00:18:13.129959 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.129974 kubelet[2543]: E0702 00:18:13.129971 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.130467 kubelet[2543]: E0702 00:18:13.130452 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.130467 kubelet[2543]: W0702 00:18:13.130465 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.132643 kubelet[2543]: E0702 00:18:13.132604 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:13.132892 kubelet[2543]: E0702 00:18:13.132878 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:13.132892 kubelet[2543]: W0702 00:18:13.132891 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:13.133001 kubelet[2543]: E0702 00:18:13.132908 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 2 00:18:13.133195 kubelet[2543]: E0702 00:18:13.133175 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.133260 kubelet[2543]: W0702 00:18:13.133193 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.133260 kubelet[2543]: E0702 00:18:13.133211 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.133420 kubelet[2543]: E0702 00:18:13.133406 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.133465 kubelet[2543]: W0702 00:18:13.133421 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.133465 kubelet[2543]: E0702 00:18:13.133434 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.133669 kubelet[2543]: E0702 00:18:13.133654 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.133715 kubelet[2543]: W0702 00:18:13.133669 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.133715 kubelet[2543]: E0702 00:18:13.133685 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.134218 kubelet[2543]: E0702 00:18:13.134200 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.134218 kubelet[2543]: W0702 00:18:13.134215 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.134318 kubelet[2543]: E0702 00:18:13.134229 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.135722 kubelet[2543]: E0702 00:18:13.135698 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.135722 kubelet[2543]: W0702 00:18:13.135720 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.135834 kubelet[2543]: E0702 00:18:13.135736 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.135977 kubelet[2543]: E0702 00:18:13.135957 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.135977 kubelet[2543]: W0702 00:18:13.135972 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.136063 kubelet[2543]: E0702 00:18:13.135983 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.136156 kubelet[2543]: E0702 00:18:13.136143 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.136156 kubelet[2543]: W0702 00:18:13.136153 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.136211 kubelet[2543]: E0702 00:18:13.136162 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.136382 kubelet[2543]: E0702 00:18:13.136362 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.136382 kubelet[2543]: W0702 00:18:13.136375 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.136507 kubelet[2543]: E0702 00:18:13.136386 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.136585 kubelet[2543]: E0702 00:18:13.136573 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.136585 kubelet[2543]: W0702 00:18:13.136583 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.136647 kubelet[2543]: E0702 00:18:13.136592 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.136979 kubelet[2543]: E0702 00:18:13.136962 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.136979 kubelet[2543]: W0702 00:18:13.136979 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.137068 kubelet[2543]: E0702 00:18:13.136990 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.137721 kubelet[2543]: E0702 00:18:13.137701 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.137721 kubelet[2543]: W0702 00:18:13.137719 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.137849 kubelet[2543]: E0702 00:18:13.137734 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.138055 kubelet[2543]: E0702 00:18:13.138020 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.138055 kubelet[2543]: W0702 00:18:13.138033 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.138270 kubelet[2543]: E0702 00:18:13.138044 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.151846 kubelet[2543]: E0702 00:18:13.151640 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.151846 kubelet[2543]: W0702 00:18:13.151663 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.151846 kubelet[2543]: E0702 00:18:13.151686 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.213903 kubelet[2543]: E0702 00:18:13.212215 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.213903 kubelet[2543]: W0702 00:18:13.212244 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.213903 kubelet[2543]: E0702 00:18:13.212466 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.213903 kubelet[2543]: I0702 00:18:13.212511 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3d95f80-f22a-4d64-99eb-0d72b7beb76e-kubelet-dir\") pod \"csi-node-driver-dpgqd\" (UID: \"d3d95f80-f22a-4d64-99eb-0d72b7beb76e\") " pod="calico-system/csi-node-driver-dpgqd" Jul 2 00:18:13.214988 kubelet[2543]: E0702 00:18:13.214116 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.214988 kubelet[2543]: W0702 00:18:13.214136 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.214988 kubelet[2543]: E0702 00:18:13.214172 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.214988 kubelet[2543]: I0702 00:18:13.214209 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3d95f80-f22a-4d64-99eb-0d72b7beb76e-registration-dir\") pod \"csi-node-driver-dpgqd\" (UID: \"d3d95f80-f22a-4d64-99eb-0d72b7beb76e\") " pod="calico-system/csi-node-driver-dpgqd" Jul 2 00:18:13.219038 kubelet[2543]: E0702 00:18:13.218898 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.219038 kubelet[2543]: W0702 00:18:13.219030 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.219038 kubelet[2543]: E0702 00:18:13.219116 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.221739 kubelet[2543]: E0702 00:18:13.219818 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.221739 kubelet[2543]: W0702 00:18:13.220597 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.221739 kubelet[2543]: I0702 00:18:13.220934 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npsr2\" (UniqueName: \"kubernetes.io/projected/d3d95f80-f22a-4d64-99eb-0d72b7beb76e-kube-api-access-npsr2\") pod \"csi-node-driver-dpgqd\" (UID: \"d3d95f80-f22a-4d64-99eb-0d72b7beb76e\") " pod="calico-system/csi-node-driver-dpgqd" Jul 2 00:18:13.222742 kubelet[2543]: E0702 00:18:13.221922 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.224428 kubelet[2543]: E0702 00:18:13.224188 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.224428 kubelet[2543]: W0702 00:18:13.224257 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.224428 kubelet[2543]: E0702 00:18:13.224397 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.225098 kubelet[2543]: E0702 00:18:13.225074 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.225286 kubelet[2543]: W0702 00:18:13.225213 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.225621 kubelet[2543]: E0702 00:18:13.225462 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.225960 kubelet[2543]: E0702 00:18:13.225892 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.225960 kubelet[2543]: W0702 00:18:13.225907 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.226318 kubelet[2543]: E0702 00:18:13.226197 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.226318 kubelet[2543]: I0702 00:18:13.226258 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d3d95f80-f22a-4d64-99eb-0d72b7beb76e-varrun\") pod \"csi-node-driver-dpgqd\" (UID: \"d3d95f80-f22a-4d64-99eb-0d72b7beb76e\") " pod="calico-system/csi-node-driver-dpgqd" Jul 2 00:18:13.227014 kubelet[2543]: E0702 00:18:13.226772 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.227014 kubelet[2543]: W0702 00:18:13.226803 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.227014 kubelet[2543]: E0702 00:18:13.226822 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.227379 kubelet[2543]: E0702 00:18:13.227343 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.227379 kubelet[2543]: W0702 00:18:13.227360 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.227638 kubelet[2543]: E0702 00:18:13.227614 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.227907 kubelet[2543]: E0702 00:18:13.227886 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.227907 kubelet[2543]: W0702 00:18:13.227905 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.228051 kubelet[2543]: E0702 00:18:13.227923 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.228942 kubelet[2543]: E0702 00:18:13.228912 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.228942 kubelet[2543]: W0702 00:18:13.228932 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.229294 kubelet[2543]: E0702 00:18:13.228955 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.229294 kubelet[2543]: I0702 00:18:13.228983 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3d95f80-f22a-4d64-99eb-0d72b7beb76e-socket-dir\") pod \"csi-node-driver-dpgqd\" (UID: \"d3d95f80-f22a-4d64-99eb-0d72b7beb76e\") " pod="calico-system/csi-node-driver-dpgqd" Jul 2 00:18:13.229294 kubelet[2543]: E0702 00:18:13.229233 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.229294 kubelet[2543]: W0702 00:18:13.229245 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.229294 kubelet[2543]: E0702 00:18:13.229256 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.230212 kubelet[2543]: E0702 00:18:13.230190 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.230212 kubelet[2543]: W0702 00:18:13.230208 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.230320 kubelet[2543]: E0702 00:18:13.230228 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.230677 kubelet[2543]: E0702 00:18:13.230656 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.230677 kubelet[2543]: W0702 00:18:13.230676 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.230807 kubelet[2543]: E0702 00:18:13.230693 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.231616 kubelet[2543]: E0702 00:18:13.231587 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.231616 kubelet[2543]: W0702 00:18:13.231607 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.231729 kubelet[2543]: E0702 00:18:13.231624 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.260799 kubelet[2543]: E0702 00:18:13.260757 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:13.262674 containerd[1472]: time="2024-07-02T00:18:13.262020778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wb8pv,Uid:c54f3f28-1765-4f64-a375-409c62d9adde,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:13.275319 containerd[1472]: time="2024-07-02T00:18:13.275265232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f94f5f8bb-grskq,Uid:6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\"" Jul 2 00:18:13.276939 kubelet[2543]: E0702 00:18:13.276270 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:13.278989 containerd[1472]: time="2024-07-02T00:18:13.278943619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:18:13.331808 kubelet[2543]: E0702 00:18:13.331603 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.331808 kubelet[2543]: W0702 00:18:13.331659 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.331808 kubelet[2543]: E0702 00:18:13.331694 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.333347 kubelet[2543]: E0702 00:18:13.332834 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.333347 kubelet[2543]: W0702 00:18:13.332862 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.333347 kubelet[2543]: E0702 00:18:13.332889 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.334507 kubelet[2543]: E0702 00:18:13.333788 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.334507 kubelet[2543]: W0702 00:18:13.333811 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.334507 kubelet[2543]: E0702 00:18:13.333864 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.334954 kubelet[2543]: E0702 00:18:13.334842 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.334954 kubelet[2543]: W0702 00:18:13.334863 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.334954 kubelet[2543]: E0702 00:18:13.334902 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.337204 kubelet[2543]: E0702 00:18:13.335475 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.337204 kubelet[2543]: W0702 00:18:13.335502 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.337204 kubelet[2543]: E0702 00:18:13.335550 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.337204 kubelet[2543]: E0702 00:18:13.336930 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.337204 kubelet[2543]: W0702 00:18:13.336948 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.337204 kubelet[2543]: E0702 00:18:13.337001 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.337961 kubelet[2543]: E0702 00:18:13.337640 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.337961 kubelet[2543]: W0702 00:18:13.337658 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.337961 kubelet[2543]: E0702 00:18:13.337758 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.338242 kubelet[2543]: E0702 00:18:13.338219 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.338334 kubelet[2543]: W0702 00:18:13.338319 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.338452 kubelet[2543]: E0702 00:18:13.338422 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.338796 kubelet[2543]: E0702 00:18:13.338780 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.338917 kubelet[2543]: W0702 00:18:13.338902 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.339028 kubelet[2543]: E0702 00:18:13.339010 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.339476 kubelet[2543]: E0702 00:18:13.339458 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.339802 kubelet[2543]: W0702 00:18:13.339780 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.340620 kubelet[2543]: E0702 00:18:13.340268 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.340807 kubelet[2543]: E0702 00:18:13.340603 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.340807 kubelet[2543]: W0702 00:18:13.340754 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.340904 kubelet[2543]: E0702 00:18:13.340811 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.341776 kubelet[2543]: E0702 00:18:13.341602 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.341776 kubelet[2543]: W0702 00:18:13.341621 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.341776 kubelet[2543]: E0702 00:18:13.341654 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.342199 kubelet[2543]: E0702 00:18:13.342049 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.342199 kubelet[2543]: W0702 00:18:13.342074 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.342199 kubelet[2543]: E0702 00:18:13.342110 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.342886 kubelet[2543]: E0702 00:18:13.342769 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.342886 kubelet[2543]: W0702 00:18:13.342788 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.342886 kubelet[2543]: E0702 00:18:13.342858 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.343388 kubelet[2543]: E0702 00:18:13.343299 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.343388 kubelet[2543]: W0702 00:18:13.343317 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.345680 kubelet[2543]: E0702 00:18:13.343803 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.345887 kubelet[2543]: E0702 00:18:13.345868 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.346133 kubelet[2543]: W0702 00:18:13.345970 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.346133 kubelet[2543]: E0702 00:18:13.346014 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.346362 kubelet[2543]: E0702 00:18:13.346345 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.346591 kubelet[2543]: W0702 00:18:13.346436 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.346591 kubelet[2543]: E0702 00:18:13.346475 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.346915 kubelet[2543]: E0702 00:18:13.346897 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.347102 kubelet[2543]: W0702 00:18:13.346996 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.347102 kubelet[2543]: E0702 00:18:13.347034 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.348680 kubelet[2543]: E0702 00:18:13.348658 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.348913 kubelet[2543]: W0702 00:18:13.348776 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.348913 kubelet[2543]: E0702 00:18:13.348817 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.349119 kubelet[2543]: E0702 00:18:13.349102 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.349264 kubelet[2543]: W0702 00:18:13.349188 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.349264 kubelet[2543]: E0702 00:18:13.349223 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.349650 kubelet[2543]: E0702 00:18:13.349633 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.349747 kubelet[2543]: W0702 00:18:13.349731 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.349926 kubelet[2543]: E0702 00:18:13.349823 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.350073 kubelet[2543]: E0702 00:18:13.350060 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.350264 kubelet[2543]: W0702 00:18:13.350134 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.350264 kubelet[2543]: E0702 00:18:13.350164 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.350650 kubelet[2543]: E0702 00:18:13.350609 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.350650 kubelet[2543]: W0702 00:18:13.350628 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.350866 kubelet[2543]: E0702 00:18:13.350847 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:13.351416 kubelet[2543]: E0702 00:18:13.351323 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.351416 kubelet[2543]: W0702 00:18:13.351341 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.351416 kubelet[2543]: E0702 00:18:13.351374 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.352930 kubelet[2543]: E0702 00:18:13.352909 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.353228 kubelet[2543]: W0702 00:18:13.353107 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.353228 kubelet[2543]: E0702 00:18:13.353138 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.355762 containerd[1472]: time="2024-07-02T00:18:13.354946097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:13.355762 containerd[1472]: time="2024-07-02T00:18:13.355247614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:13.355762 containerd[1472]: time="2024-07-02T00:18:13.355339673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:13.355762 containerd[1472]: time="2024-07-02T00:18:13.355354370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:13.370140 kubelet[2543]: E0702 00:18:13.369837 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:13.370140 kubelet[2543]: W0702 00:18:13.369885 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:13.370140 kubelet[2543]: E0702 00:18:13.369917 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:13.408225 systemd[1]: Started cri-containerd-ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998.scope - libcontainer container ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998. 
Jul 2 00:18:13.453657 containerd[1472]: time="2024-07-02T00:18:13.453462765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wb8pv,Uid:c54f3f28-1765-4f64-a375-409c62d9adde,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\"" Jul 2 00:18:13.455208 kubelet[2543]: E0702 00:18:13.455174 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:15.140751 kubelet[2543]: E0702 00:18:15.139891 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" Jul 2 00:18:15.638129 containerd[1472]: time="2024-07-02T00:18:15.638072278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:15.640125 containerd[1472]: time="2024-07-02T00:18:15.640037850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:18:15.642359 containerd[1472]: time="2024-07-02T00:18:15.641169425Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:15.644374 containerd[1472]: time="2024-07-02T00:18:15.644319356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:15.645174 containerd[1472]: time="2024-07-02T00:18:15.644968678Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.365978909s" Jul 2 00:18:15.645174 containerd[1472]: time="2024-07-02T00:18:15.645019971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:18:15.648472 containerd[1472]: time="2024-07-02T00:18:15.648402670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:18:15.669218 containerd[1472]: time="2024-07-02T00:18:15.669151493Z" level=info msg="CreateContainer within sandbox \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:18:15.770419 containerd[1472]: time="2024-07-02T00:18:15.770337935Z" level=info msg="CreateContainer within sandbox \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\"" Jul 2 00:18:15.773664 containerd[1472]: time="2024-07-02T00:18:15.772969371Z" level=info msg="StartContainer for \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\"" Jul 2 00:18:15.828799 systemd[1]: Started cri-containerd-e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657.scope - libcontainer container e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657. 
Jul 2 00:18:15.925540 containerd[1472]: time="2024-07-02T00:18:15.924614065Z" level=info msg="StartContainer for \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\" returns successfully" Jul 2 00:18:16.281061 containerd[1472]: time="2024-07-02T00:18:16.281010627Z" level=info msg="StopContainer for \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\" with timeout 300 (s)" Jul 2 00:18:16.283252 containerd[1472]: time="2024-07-02T00:18:16.282486401Z" level=info msg="Stop container \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\" with signal terminated" Jul 2 00:18:16.302852 systemd[1]: cri-containerd-e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657.scope: Deactivated successfully. Jul 2 00:18:16.310712 kubelet[2543]: I0702 00:18:16.309521 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f94f5f8bb-grskq" podStartSLOduration=1.939300352 podStartE2EDuration="4.309490719s" podCreationTimestamp="2024-07-02 00:18:12 +0000 UTC" firstStartedPulling="2024-07-02 00:18:13.277322931 +0000 UTC m=+21.339362196" lastFinishedPulling="2024-07-02 00:18:15.647513303 +0000 UTC m=+23.709552563" observedRunningTime="2024-07-02 00:18:16.304607308 +0000 UTC m=+24.366646586" watchObservedRunningTime="2024-07-02 00:18:16.309490719 +0000 UTC m=+24.371530001" Jul 2 00:18:16.364772 containerd[1472]: time="2024-07-02T00:18:16.363504877Z" level=info msg="shim disconnected" id=e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657 namespace=k8s.io Jul 2 00:18:16.364772 containerd[1472]: time="2024-07-02T00:18:16.363625231Z" level=warning msg="cleaning up after shim disconnected" id=e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657 namespace=k8s.io Jul 2 00:18:16.364772 containerd[1472]: time="2024-07-02T00:18:16.363638904Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:16.391587 containerd[1472]: 
time="2024-07-02T00:18:16.391192457Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:18:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:18:16.403946 containerd[1472]: time="2024-07-02T00:18:16.403769014Z" level=info msg="StopContainer for \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\" returns successfully" Jul 2 00:18:16.406266 containerd[1472]: time="2024-07-02T00:18:16.405923697Z" level=info msg="StopPodSandbox for \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\"" Jul 2 00:18:16.406266 containerd[1472]: time="2024-07-02T00:18:16.405990442Z" level=info msg="Container to stop \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:18:16.423742 systemd[1]: cri-containerd-2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43.scope: Deactivated successfully. 
Jul 2 00:18:16.475794 containerd[1472]: time="2024-07-02T00:18:16.474943475Z" level=info msg="shim disconnected" id=2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43 namespace=k8s.io Jul 2 00:18:16.476134 containerd[1472]: time="2024-07-02T00:18:16.475794398Z" level=warning msg="cleaning up after shim disconnected" id=2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43 namespace=k8s.io Jul 2 00:18:16.476134 containerd[1472]: time="2024-07-02T00:18:16.475818350Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:16.507438 containerd[1472]: time="2024-07-02T00:18:16.506678297Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:18:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:18:16.509365 containerd[1472]: time="2024-07-02T00:18:16.509317738Z" level=info msg="TearDown network for sandbox \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\" successfully" Jul 2 00:18:16.509596 containerd[1472]: time="2024-07-02T00:18:16.509573750Z" level=info msg="StopPodSandbox for \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\" returns successfully" Jul 2 00:18:16.552697 kubelet[2543]: I0702 00:18:16.552507 2543 topology_manager.go:215] "Topology Admit Handler" podUID="36f28dd5-3300-4e13-8481-384a11689f64" podNamespace="calico-system" podName="calico-typha-545b7b6997-b9ckw" Jul 2 00:18:16.556778 kubelet[2543]: E0702 00:18:16.554600 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c" containerName="calico-typha" Jul 2 00:18:16.556778 kubelet[2543]: I0702 00:18:16.554701 2543 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c" containerName="calico-typha" Jul 2 00:18:16.571713 systemd[1]: Created slice 
kubepods-besteffort-pod36f28dd5_3300_4e13_8481_384a11689f64.slice - libcontainer container kubepods-besteffort-pod36f28dd5_3300_4e13_8481_384a11689f64.slice. Jul 2 00:18:16.575013 kubelet[2543]: E0702 00:18:16.574955 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.575013 kubelet[2543]: W0702 00:18:16.574983 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.575606 kubelet[2543]: E0702 00:18:16.575237 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.578561 kubelet[2543]: E0702 00:18:16.578385 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.578561 kubelet[2543]: W0702 00:18:16.578423 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.578561 kubelet[2543]: E0702 00:18:16.578454 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.580101 kubelet[2543]: E0702 00:18:16.579852 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.580101 kubelet[2543]: W0702 00:18:16.579885 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.580101 kubelet[2543]: E0702 00:18:16.579911 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.582237 kubelet[2543]: E0702 00:18:16.582098 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.582237 kubelet[2543]: W0702 00:18:16.582123 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.582237 kubelet[2543]: E0702 00:18:16.582164 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.583505 kubelet[2543]: E0702 00:18:16.583468 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.583695 kubelet[2543]: W0702 00:18:16.583504 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.583781 kubelet[2543]: E0702 00:18:16.583755 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.584119 kubelet[2543]: E0702 00:18:16.584101 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.584193 kubelet[2543]: W0702 00:18:16.584118 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.584193 kubelet[2543]: E0702 00:18:16.584140 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.584384 kubelet[2543]: E0702 00:18:16.584369 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.584633 kubelet[2543]: W0702 00:18:16.584385 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.584633 kubelet[2543]: E0702 00:18:16.584413 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.585706 kubelet[2543]: E0702 00:18:16.585678 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.585870 kubelet[2543]: W0702 00:18:16.585707 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.585870 kubelet[2543]: E0702 00:18:16.585733 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.586095 kubelet[2543]: E0702 00:18:16.586079 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.586095 kubelet[2543]: W0702 00:18:16.586095 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.586258 kubelet[2543]: E0702 00:18:16.586111 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.586345 kubelet[2543]: E0702 00:18:16.586334 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.586423 kubelet[2543]: W0702 00:18:16.586346 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.586423 kubelet[2543]: E0702 00:18:16.586364 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.586640 kubelet[2543]: E0702 00:18:16.586624 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.586640 kubelet[2543]: W0702 00:18:16.586640 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.586920 kubelet[2543]: E0702 00:18:16.586653 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.587050 kubelet[2543]: E0702 00:18:16.587018 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.587050 kubelet[2543]: W0702 00:18:16.587035 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.587583 kubelet[2543]: E0702 00:18:16.587050 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.587955 kubelet[2543]: E0702 00:18:16.587933 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.587955 kubelet[2543]: W0702 00:18:16.587956 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.588068 kubelet[2543]: E0702 00:18:16.587977 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.588068 kubelet[2543]: I0702 00:18:16.588031 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phcr8\" (UniqueName: \"kubernetes.io/projected/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-kube-api-access-phcr8\") pod \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\" (UID: \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\") " Jul 2 00:18:16.588680 kubelet[2543]: E0702 00:18:16.588614 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.588680 kubelet[2543]: W0702 00:18:16.588633 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.588680 kubelet[2543]: E0702 00:18:16.588659 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.588778 kubelet[2543]: I0702 00:18:16.588697 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-typha-certs\") pod \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\" (UID: \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\") " Jul 2 00:18:16.589672 kubelet[2543]: E0702 00:18:16.589645 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.589672 kubelet[2543]: W0702 00:18:16.589671 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.589777 kubelet[2543]: E0702 00:18:16.589700 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.589777 kubelet[2543]: I0702 00:18:16.589738 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36f28dd5-3300-4e13-8481-384a11689f64-tigera-ca-bundle\") pod \"calico-typha-545b7b6997-b9ckw\" (UID: \"36f28dd5-3300-4e13-8481-384a11689f64\") " pod="calico-system/calico-typha-545b7b6997-b9ckw" Jul 2 00:18:16.590394 kubelet[2543]: E0702 00:18:16.590371 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.590458 kubelet[2543]: W0702 00:18:16.590394 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.590627 kubelet[2543]: E0702 00:18:16.590512 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.590627 kubelet[2543]: I0702 00:18:16.590583 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/36f28dd5-3300-4e13-8481-384a11689f64-typha-certs\") pod \"calico-typha-545b7b6997-b9ckw\" (UID: \"36f28dd5-3300-4e13-8481-384a11689f64\") " pod="calico-system/calico-typha-545b7b6997-b9ckw" Jul 2 00:18:16.591751 kubelet[2543]: E0702 00:18:16.591695 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.591751 kubelet[2543]: W0702 00:18:16.591724 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.592557 kubelet[2543]: E0702 00:18:16.591862 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.592557 kubelet[2543]: E0702 00:18:16.592046 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.592557 kubelet[2543]: W0702 00:18:16.592059 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.592557 kubelet[2543]: E0702 00:18:16.592476 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.592557 kubelet[2543]: W0702 00:18:16.592493 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.593923 kubelet[2543]: E0702 00:18:16.593889 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.593923 kubelet[2543]: W0702 00:18:16.593916 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.594238 kubelet[2543]: E0702 00:18:16.594220 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.594238 kubelet[2543]: W0702 00:18:16.594237 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.594332 kubelet[2543]: E0702 00:18:16.594258 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.594491 kubelet[2543]: E0702 00:18:16.594474 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.594568 kubelet[2543]: W0702 00:18:16.594489 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.594568 kubelet[2543]: E0702 00:18:16.594513 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.594654 kubelet[2543]: E0702 00:18:16.594592 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.606368 kubelet[2543]: I0702 00:18:16.604423 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-kube-api-access-phcr8" (OuterVolumeSpecName: "kube-api-access-phcr8") pod "6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c" (UID: "6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c"). InnerVolumeSpecName "kube-api-access-phcr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:18:16.606368 kubelet[2543]: E0702 00:18:16.604461 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.606368 kubelet[2543]: I0702 00:18:16.604426 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c" (UID: "6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c"). 
InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:18:16.606368 kubelet[2543]: E0702 00:18:16.605396 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.606368 kubelet[2543]: I0702 00:18:16.605444 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ktd9\" (UniqueName: \"kubernetes.io/projected/36f28dd5-3300-4e13-8481-384a11689f64-kube-api-access-7ktd9\") pod \"calico-typha-545b7b6997-b9ckw\" (UID: \"36f28dd5-3300-4e13-8481-384a11689f64\") " pod="calico-system/calico-typha-545b7b6997-b9ckw" Jul 2 00:18:16.609299 kubelet[2543]: E0702 00:18:16.609253 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.609299 kubelet[2543]: W0702 00:18:16.609295 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.609495 kubelet[2543]: E0702 00:18:16.609336 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.609698 kubelet[2543]: I0702 00:18:16.609618 2543 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-typha-certs\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:16.609698 kubelet[2543]: I0702 00:18:16.609642 2543 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-phcr8\" (UniqueName: \"kubernetes.io/projected/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-kube-api-access-phcr8\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:16.610116 kubelet[2543]: E0702 00:18:16.609968 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.610116 kubelet[2543]: W0702 00:18:16.609991 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.610116 kubelet[2543]: E0702 00:18:16.610008 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.611925 kubelet[2543]: E0702 00:18:16.611880 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.611925 kubelet[2543]: W0702 00:18:16.611914 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.612680 kubelet[2543]: E0702 00:18:16.611949 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.659965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657-rootfs.mount: Deactivated successfully. Jul 2 00:18:16.660163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43-rootfs.mount: Deactivated successfully. Jul 2 00:18:16.660261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43-shm.mount: Deactivated successfully. Jul 2 00:18:16.660367 systemd[1]: var-lib-kubelet-pods-6e8a9d99\x2d8d13\x2d42e1\x2db56f\x2d2cd5e2bc933c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dphcr8.mount: Deactivated successfully. Jul 2 00:18:16.660496 systemd[1]: var-lib-kubelet-pods-6e8a9d99\x2d8d13\x2d42e1\x2db56f\x2d2cd5e2bc933c-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jul 2 00:18:16.711520 kubelet[2543]: E0702 00:18:16.711111 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.712027 kubelet[2543]: W0702 00:18:16.711814 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.712027 kubelet[2543]: E0702 00:18:16.711857 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.712027 kubelet[2543]: I0702 00:18:16.711914 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-tigera-ca-bundle\") pod \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\" (UID: \"6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c\") " Jul 2 00:18:16.712519 kubelet[2543]: E0702 00:18:16.712381 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.712519 kubelet[2543]: W0702 00:18:16.712398 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.712519 kubelet[2543]: E0702 00:18:16.712416 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.712980 kubelet[2543]: E0702 00:18:16.712830 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.712980 kubelet[2543]: W0702 00:18:16.712845 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.712980 kubelet[2543]: E0702 00:18:16.712865 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.713312 kubelet[2543]: E0702 00:18:16.713184 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.713312 kubelet[2543]: W0702 00:18:16.713194 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.713312 kubelet[2543]: E0702 00:18:16.713205 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.713983 kubelet[2543]: E0702 00:18:16.713857 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.713983 kubelet[2543]: W0702 00:18:16.713871 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.713983 kubelet[2543]: E0702 00:18:16.713885 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.714474 kubelet[2543]: E0702 00:18:16.714347 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.714474 kubelet[2543]: W0702 00:18:16.714364 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.714474 kubelet[2543]: E0702 00:18:16.714380 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.715878 kubelet[2543]: E0702 00:18:16.715703 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.715878 kubelet[2543]: W0702 00:18:16.715721 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.715878 kubelet[2543]: E0702 00:18:16.715738 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.716293 kubelet[2543]: E0702 00:18:16.716242 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.716375 kubelet[2543]: W0702 00:18:16.716362 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.716440 kubelet[2543]: E0702 00:18:16.716429 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.717193 kubelet[2543]: E0702 00:18:16.717176 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.717303 kubelet[2543]: W0702 00:18:16.717289 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.717478 kubelet[2543]: E0702 00:18:16.717365 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.717715 kubelet[2543]: E0702 00:18:16.717703 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.717775 kubelet[2543]: W0702 00:18:16.717767 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.717842 kubelet[2543]: E0702 00:18:16.717832 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.718184 kubelet[2543]: E0702 00:18:16.718073 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.718184 kubelet[2543]: W0702 00:18:16.718084 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.718184 kubelet[2543]: E0702 00:18:16.718095 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.719602 kubelet[2543]: E0702 00:18:16.718748 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.719602 kubelet[2543]: W0702 00:18:16.718766 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.719602 kubelet[2543]: E0702 00:18:16.718797 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.720016 kubelet[2543]: E0702 00:18:16.720000 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.720097 kubelet[2543]: W0702 00:18:16.720087 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.720147 kubelet[2543]: E0702 00:18:16.720138 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.720405 kubelet[2543]: E0702 00:18:16.720394 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.720470 kubelet[2543]: W0702 00:18:16.720461 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.720681 kubelet[2543]: E0702 00:18:16.720667 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.720907 kubelet[2543]: E0702 00:18:16.720896 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.720970 kubelet[2543]: W0702 00:18:16.720961 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.721033 kubelet[2543]: E0702 00:18:16.721019 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.721278 kubelet[2543]: E0702 00:18:16.721261 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.721389 kubelet[2543]: W0702 00:18:16.721349 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.721503 kubelet[2543]: E0702 00:18:16.721476 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.722666 kubelet[2543]: E0702 00:18:16.722633 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.724390 kubelet[2543]: W0702 00:18:16.724355 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.724668 kubelet[2543]: E0702 00:18:16.724652 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.727331 systemd[1]: var-lib-kubelet-pods-6e8a9d99\x2d8d13\x2d42e1\x2db56f\x2d2cd5e2bc933c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
Jul 2 00:18:16.728383 kubelet[2543]: E0702 00:18:16.727794 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.728383 kubelet[2543]: W0702 00:18:16.727821 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.728383 kubelet[2543]: E0702 00:18:16.727851 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.728383 kubelet[2543]: I0702 00:18:16.728280 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c" (UID: "6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:18:16.738801 kubelet[2543]: E0702 00:18:16.738765 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.738946 kubelet[2543]: W0702 00:18:16.738930 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.739039 kubelet[2543]: E0702 00:18:16.739027 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:16.748456 kubelet[2543]: E0702 00:18:16.747864 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:16.748456 kubelet[2543]: W0702 00:18:16.747893 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:16.748456 kubelet[2543]: E0702 00:18:16.747918 2543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:16.814102 kubelet[2543]: I0702 00:18:16.813851 2543 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c-tigera-ca-bundle\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:16.882066 kubelet[2543]: E0702 00:18:16.879709 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:16.882648 containerd[1472]: time="2024-07-02T00:18:16.882605922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545b7b6997-b9ckw,Uid:36f28dd5-3300-4e13-8481-384a11689f64,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:16.972353 containerd[1472]: time="2024-07-02T00:18:16.971287938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:16.972353 containerd[1472]: time="2024-07-02T00:18:16.971379143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:16.972353 containerd[1472]: time="2024-07-02T00:18:16.971401862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:16.972353 containerd[1472]: time="2024-07-02T00:18:16.971414911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:17.007728 systemd[1]: Started cri-containerd-e4be3bc0223d528eb8cb19e1889fa49e4ba957c1ab73b9988ee482faba3c3e03.scope - libcontainer container e4be3bc0223d528eb8cb19e1889fa49e4ba957c1ab73b9988ee482faba3c3e03. Jul 2 00:18:17.139207 kubelet[2543]: E0702 00:18:17.137085 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" Jul 2 00:18:17.151504 containerd[1472]: time="2024-07-02T00:18:17.151443900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545b7b6997-b9ckw,Uid:36f28dd5-3300-4e13-8481-384a11689f64,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4be3bc0223d528eb8cb19e1889fa49e4ba957c1ab73b9988ee482faba3c3e03\"" Jul 2 00:18:17.156641 kubelet[2543]: E0702 00:18:17.155882 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:17.160569 containerd[1472]: time="2024-07-02T00:18:17.159473340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:17.163307 containerd[1472]: time="2024-07-02T00:18:17.163226109Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 00:18:17.164522 containerd[1472]: time="2024-07-02T00:18:17.164441299Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:17.169663 containerd[1472]: time="2024-07-02T00:18:17.169252395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:17.172542 containerd[1472]: time="2024-07-02T00:18:17.172172858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.523724535s" Jul 2 00:18:17.172542 containerd[1472]: time="2024-07-02T00:18:17.172222195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:18:17.177059 containerd[1472]: time="2024-07-02T00:18:17.176728224Z" level=info msg="CreateContainer within sandbox \"e4be3bc0223d528eb8cb19e1889fa49e4ba957c1ab73b9988ee482faba3c3e03\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:18:17.180562 containerd[1472]: time="2024-07-02T00:18:17.179458686Z" level=info msg="CreateContainer within sandbox \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:18:17.208020 containerd[1472]: time="2024-07-02T00:18:17.207973288Z" level=info 
msg="CreateContainer within sandbox \"e4be3bc0223d528eb8cb19e1889fa49e4ba957c1ab73b9988ee482faba3c3e03\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a8a908c1feaf9ebfdfb0add17cde560a735442be4364152eb9bea6876d5d64a8\"" Jul 2 00:18:17.209256 containerd[1472]: time="2024-07-02T00:18:17.209212181Z" level=info msg="StartContainer for \"a8a908c1feaf9ebfdfb0add17cde560a735442be4364152eb9bea6876d5d64a8\"" Jul 2 00:18:17.232387 containerd[1472]: time="2024-07-02T00:18:17.232341711Z" level=info msg="CreateContainer within sandbox \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6\"" Jul 2 00:18:17.233432 containerd[1472]: time="2024-07-02T00:18:17.233385324Z" level=info msg="StartContainer for \"985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6\"" Jul 2 00:18:17.266506 systemd[1]: Started cri-containerd-a8a908c1feaf9ebfdfb0add17cde560a735442be4364152eb9bea6876d5d64a8.scope - libcontainer container a8a908c1feaf9ebfdfb0add17cde560a735442be4364152eb9bea6876d5d64a8. Jul 2 00:18:17.286162 kubelet[2543]: I0702 00:18:17.286133 2543 scope.go:117] "RemoveContainer" containerID="e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657" Jul 2 00:18:17.296562 systemd[1]: Removed slice kubepods-besteffort-pod6e8a9d99_8d13_42e1_b56f_2cd5e2bc933c.slice - libcontainer container kubepods-besteffort-pod6e8a9d99_8d13_42e1_b56f_2cd5e2bc933c.slice. 
Jul 2 00:18:17.300224 containerd[1472]: time="2024-07-02T00:18:17.299741574Z" level=info msg="RemoveContainer for \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\"" Jul 2 00:18:17.313775 containerd[1472]: time="2024-07-02T00:18:17.313723727Z" level=info msg="RemoveContainer for \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\" returns successfully" Jul 2 00:18:17.314162 kubelet[2543]: I0702 00:18:17.314125 2543 scope.go:117] "RemoveContainer" containerID="e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657" Jul 2 00:18:17.322880 kubelet[2543]: E0702 00:18:17.315024 2543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\": not found" containerID="e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657" Jul 2 00:18:17.322880 kubelet[2543]: I0702 00:18:17.315091 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657"} err="failed to get container status \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\": rpc error: code = NotFound desc = an error occurred when try to find container \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\": not found" Jul 2 00:18:17.322788 systemd[1]: Started cri-containerd-985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6.scope - libcontainer container 985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6. 
Jul 2 00:18:17.323087 containerd[1472]: time="2024-07-02T00:18:17.314803701Z" level=error msg="ContainerStatus for \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e15e2e818f24761a6bb0ae4fcd2e3c3f5431d6ef316d5e434703e1c16ad85657\": not found" Jul 2 00:18:17.404014 containerd[1472]: time="2024-07-02T00:18:17.403833623Z" level=info msg="StartContainer for \"985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6\" returns successfully" Jul 2 00:18:17.415856 containerd[1472]: time="2024-07-02T00:18:17.415787846Z" level=info msg="StartContainer for \"a8a908c1feaf9ebfdfb0add17cde560a735442be4364152eb9bea6876d5d64a8\" returns successfully" Jul 2 00:18:17.456681 systemd[1]: cri-containerd-985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6.scope: Deactivated successfully. Jul 2 00:18:17.528050 containerd[1472]: time="2024-07-02T00:18:17.527942165Z" level=info msg="shim disconnected" id=985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6 namespace=k8s.io Jul 2 00:18:17.528433 containerd[1472]: time="2024-07-02T00:18:17.528402399Z" level=warning msg="cleaning up after shim disconnected" id=985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6 namespace=k8s.io Jul 2 00:18:17.528544 containerd[1472]: time="2024-07-02T00:18:17.528508038Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:18.141407 kubelet[2543]: I0702 00:18:18.141128 2543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c" path="/var/lib/kubelet/pods/6e8a9d99-8d13-42e1-b56f-2cd5e2bc933c/volumes" Jul 2 00:18:18.324067 containerd[1472]: time="2024-07-02T00:18:18.324010935Z" level=info msg="StopPodSandbox for \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\"" Jul 2 00:18:18.326328 containerd[1472]: time="2024-07-02T00:18:18.324066190Z" level=info msg="Container 
to stop \"985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:18:18.328988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998-shm.mount: Deactivated successfully. Jul 2 00:18:18.342344 kubelet[2543]: E0702 00:18:18.342074 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:18.354405 systemd[1]: cri-containerd-ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998.scope: Deactivated successfully. Jul 2 00:18:18.421941 containerd[1472]: time="2024-07-02T00:18:18.421318904Z" level=info msg="shim disconnected" id=ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998 namespace=k8s.io Jul 2 00:18:18.421941 containerd[1472]: time="2024-07-02T00:18:18.421394238Z" level=warning msg="cleaning up after shim disconnected" id=ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998 namespace=k8s.io Jul 2 00:18:18.421941 containerd[1472]: time="2024-07-02T00:18:18.421413775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:18.422270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998-rootfs.mount: Deactivated successfully. 
Jul 2 00:18:18.445392 containerd[1472]: time="2024-07-02T00:18:18.445325524Z" level=info msg="TearDown network for sandbox \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\" successfully" Jul 2 00:18:18.445392 containerd[1472]: time="2024-07-02T00:18:18.445381385Z" level=info msg="StopPodSandbox for \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\" returns successfully" Jul 2 00:18:18.481568 kubelet[2543]: I0702 00:18:18.480098 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-545b7b6997-b9ckw" podStartSLOduration=5.480076001 podStartE2EDuration="5.480076001s" podCreationTimestamp="2024-07-02 00:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:18.391646023 +0000 UTC m=+26.453685302" watchObservedRunningTime="2024-07-02 00:18:18.480076001 +0000 UTC m=+26.542115279" Jul 2 00:18:18.528665 kubelet[2543]: I0702 00:18:18.528128 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-net-dir\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.528665 kubelet[2543]: I0702 00:18:18.528179 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-flexvol-driver-host\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.528665 kubelet[2543]: I0702 00:18:18.528203 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-policysync\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: 
\"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.528665 kubelet[2543]: I0702 00:18:18.528219 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-var-lib-calico\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.528665 kubelet[2543]: I0702 00:18:18.528254 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.528665 kubelet[2543]: I0702 00:18:18.528272 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c54f3f28-1765-4f64-a375-409c62d9adde-node-certs\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.529095 kubelet[2543]: I0702 00:18:18.528358 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgmdv\" (UniqueName: \"kubernetes.io/projected/c54f3f28-1765-4f64-a375-409c62d9adde-kube-api-access-pgmdv\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.529095 kubelet[2543]: I0702 00:18:18.528380 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-lib-modules\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.529095 kubelet[2543]: I0702 00:18:18.528408 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c54f3f28-1765-4f64-a375-409c62d9adde-tigera-ca-bundle\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.529095 kubelet[2543]: I0702 00:18:18.528426 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-bin-dir\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.529095 kubelet[2543]: I0702 00:18:18.528442 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-log-dir\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.529095 kubelet[2543]: I0702 00:18:18.528461 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-xtables-lock\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.529306 kubelet[2543]: I0702 00:18:18.528482 2543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-var-run-calico\") pod \"c54f3f28-1765-4f64-a375-409c62d9adde\" (UID: \"c54f3f28-1765-4f64-a375-409c62d9adde\") " Jul 2 00:18:18.529306 kubelet[2543]: I0702 00:18:18.528564 2543 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-net-dir\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.529306 kubelet[2543]: I0702 00:18:18.528606 2543 operation_generator.go:887] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.530557 kubelet[2543]: I0702 00:18:18.529616 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.530808 kubelet[2543]: I0702 00:18:18.530779 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.530903 kubelet[2543]: I0702 00:18:18.530892 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.530974 kubelet[2543]: I0702 00:18:18.530964 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.531035 kubelet[2543]: I0702 00:18:18.530992 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.531101 kubelet[2543]: I0702 00:18:18.531023 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-policysync" (OuterVolumeSpecName: "policysync") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.531151 kubelet[2543]: I0702 00:18:18.531051 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:18.532520 kubelet[2543]: I0702 00:18:18.532362 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54f3f28-1765-4f64-a375-409c62d9adde-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:18:18.536005 kubelet[2543]: I0702 00:18:18.535939 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54f3f28-1765-4f64-a375-409c62d9adde-kube-api-access-pgmdv" (OuterVolumeSpecName: "kube-api-access-pgmdv") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "kube-api-access-pgmdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:18:18.537164 kubelet[2543]: I0702 00:18:18.537112 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54f3f28-1765-4f64-a375-409c62d9adde-node-certs" (OuterVolumeSpecName: "node-certs") pod "c54f3f28-1765-4f64-a375-409c62d9adde" (UID: "c54f3f28-1765-4f64-a375-409c62d9adde"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:18:18.537949 systemd[1]: var-lib-kubelet-pods-c54f3f28\x2d1765\x2d4f64\x2da375\x2d409c62d9adde-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jul 2 00:18:18.541370 systemd[1]: var-lib-kubelet-pods-c54f3f28\x2d1765\x2d4f64\x2da375\x2d409c62d9adde-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpgmdv.mount: Deactivated successfully. 
Jul 2 00:18:18.629004 kubelet[2543]: I0702 00:18:18.628759 2543 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-xtables-lock\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.629004 kubelet[2543]: I0702 00:18:18.628801 2543 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-var-run-calico\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.629004 kubelet[2543]: I0702 00:18:18.628817 2543 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-flexvol-driver-host\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.629004 kubelet[2543]: I0702 00:18:18.628835 2543 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-policysync\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.629004 kubelet[2543]: I0702 00:18:18.628848 2543 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-var-lib-calico\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.629004 kubelet[2543]: I0702 00:18:18.628859 2543 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c54f3f28-1765-4f64-a375-409c62d9adde-node-certs\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.629004 kubelet[2543]: I0702 00:18:18.628871 2543 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pgmdv\" (UniqueName: \"kubernetes.io/projected/c54f3f28-1765-4f64-a375-409c62d9adde-kube-api-access-pgmdv\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 
2 00:18:18.629004 kubelet[2543]: I0702 00:18:18.628883 2543 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-lib-modules\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.630675 kubelet[2543]: I0702 00:18:18.628898 2543 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c54f3f28-1765-4f64-a375-409c62d9adde-tigera-ca-bundle\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.630675 kubelet[2543]: I0702 00:18:18.628909 2543 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-bin-dir\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:18.630675 kubelet[2543]: I0702 00:18:18.628922 2543 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c54f3f28-1765-4f64-a375-409c62d9adde-cni-log-dir\") on node \"ci-3975.1.1-c-5be545c9fd\" DevicePath \"\"" Jul 2 00:18:19.136948 kubelet[2543]: E0702 00:18:19.136850 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" Jul 2 00:18:19.349622 kubelet[2543]: I0702 00:18:19.349580 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:18:19.350445 kubelet[2543]: I0702 00:18:19.350285 2543 scope.go:117] "RemoveContainer" containerID="985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6" Jul 2 00:18:19.354153 kubelet[2543]: E0702 00:18:19.354122 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:19.358402 containerd[1472]: time="2024-07-02T00:18:19.357595172Z" level=info msg="RemoveContainer for \"985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6\"" Jul 2 00:18:19.363760 containerd[1472]: time="2024-07-02T00:18:19.363380109Z" level=info msg="RemoveContainer for \"985aa03d626fd2b26b3b8a3bb28689a93b866b431575e0fec77027df2e9c87f6\" returns successfully" Jul 2 00:18:19.367503 systemd[1]: Removed slice kubepods-besteffort-podc54f3f28_1765_4f64_a375_409c62d9adde.slice - libcontainer container kubepods-besteffort-podc54f3f28_1765_4f64_a375_409c62d9adde.slice. Jul 2 00:18:19.417939 kubelet[2543]: I0702 00:18:19.417520 2543 topology_manager.go:215] "Topology Admit Handler" podUID="4f68e158-9391-4785-9426-f0afb502ba98" podNamespace="calico-system" podName="calico-node-n6dkq" Jul 2 00:18:19.417939 kubelet[2543]: E0702 00:18:19.417610 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c54f3f28-1765-4f64-a375-409c62d9adde" containerName="flexvol-driver" Jul 2 00:18:19.417939 kubelet[2543]: I0702 00:18:19.417634 2543 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54f3f28-1765-4f64-a375-409c62d9adde" containerName="flexvol-driver" Jul 2 00:18:19.427863 systemd[1]: Created slice kubepods-besteffort-pod4f68e158_9391_4785_9426_f0afb502ba98.slice - libcontainer container kubepods-besteffort-pod4f68e158_9391_4785_9426_f0afb502ba98.slice. 
Jul 2 00:18:19.535479 kubelet[2543]: I0702 00:18:19.535407 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-xtables-lock\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.535479 kubelet[2543]: I0702 00:18:19.535484 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-flexvol-driver-host\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.535776 kubelet[2543]: I0702 00:18:19.535524 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4f68e158-9391-4785-9426-f0afb502ba98-node-certs\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.535807 kubelet[2543]: I0702 00:18:19.535785 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-cni-log-dir\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.535868 kubelet[2543]: I0702 00:18:19.535823 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f68e158-9391-4785-9426-f0afb502ba98-tigera-ca-bundle\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.535921 kubelet[2543]: I0702 00:18:19.535904 2543 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-var-run-calico\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.536141 kubelet[2543]: I0702 00:18:19.535938 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-lib-modules\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.536188 kubelet[2543]: I0702 00:18:19.536160 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-policysync\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.536238 kubelet[2543]: I0702 00:18:19.536193 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-cni-bin-dir\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.536238 kubelet[2543]: I0702 00:18:19.536217 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27dz2\" (UniqueName: \"kubernetes.io/projected/4f68e158-9391-4785-9426-f0afb502ba98-kube-api-access-27dz2\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.536315 kubelet[2543]: I0702 00:18:19.536240 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-var-lib-calico\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.536315 kubelet[2543]: I0702 00:18:19.536265 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f68e158-9391-4785-9426-f0afb502ba98-cni-net-dir\") pod \"calico-node-n6dkq\" (UID: \"4f68e158-9391-4785-9426-f0afb502ba98\") " pod="calico-system/calico-node-n6dkq" Jul 2 00:18:19.733515 kubelet[2543]: E0702 00:18:19.733134 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:19.734549 containerd[1472]: time="2024-07-02T00:18:19.734026775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n6dkq,Uid:4f68e158-9391-4785-9426-f0afb502ba98,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:19.766168 containerd[1472]: time="2024-07-02T00:18:19.765766024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:19.766168 containerd[1472]: time="2024-07-02T00:18:19.765950790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:19.766168 containerd[1472]: time="2024-07-02T00:18:19.765995539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:19.766168 containerd[1472]: time="2024-07-02T00:18:19.766017792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:19.794854 systemd[1]: Started cri-containerd-fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37.scope - libcontainer container fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37. Jul 2 00:18:19.822559 containerd[1472]: time="2024-07-02T00:18:19.822470612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n6dkq,Uid:4f68e158-9391-4785-9426-f0afb502ba98,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37\"" Jul 2 00:18:19.824845 kubelet[2543]: E0702 00:18:19.823523 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:19.831017 containerd[1472]: time="2024-07-02T00:18:19.830488714Z" level=info msg="CreateContainer within sandbox \"fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:18:19.853107 containerd[1472]: time="2024-07-02T00:18:19.852966489Z" level=info msg="CreateContainer within sandbox \"fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"49d28f9a38ae15bec1dda70f4cb3fde27db64626104282520dfc94582eead5e1\"" Jul 2 00:18:19.857940 containerd[1472]: time="2024-07-02T00:18:19.854762814Z" level=info msg="StartContainer for \"49d28f9a38ae15bec1dda70f4cb3fde27db64626104282520dfc94582eead5e1\"" Jul 2 00:18:19.897924 systemd[1]: Started cri-containerd-49d28f9a38ae15bec1dda70f4cb3fde27db64626104282520dfc94582eead5e1.scope - libcontainer container 49d28f9a38ae15bec1dda70f4cb3fde27db64626104282520dfc94582eead5e1. 
Jul 2 00:18:19.945068 containerd[1472]: time="2024-07-02T00:18:19.944860781Z" level=info msg="StartContainer for \"49d28f9a38ae15bec1dda70f4cb3fde27db64626104282520dfc94582eead5e1\" returns successfully" Jul 2 00:18:19.964119 systemd[1]: cri-containerd-49d28f9a38ae15bec1dda70f4cb3fde27db64626104282520dfc94582eead5e1.scope: Deactivated successfully. Jul 2 00:18:20.001413 containerd[1472]: time="2024-07-02T00:18:20.001231620Z" level=info msg="shim disconnected" id=49d28f9a38ae15bec1dda70f4cb3fde27db64626104282520dfc94582eead5e1 namespace=k8s.io Jul 2 00:18:20.001413 containerd[1472]: time="2024-07-02T00:18:20.001305092Z" level=warning msg="cleaning up after shim disconnected" id=49d28f9a38ae15bec1dda70f4cb3fde27db64626104282520dfc94582eead5e1 namespace=k8s.io Jul 2 00:18:20.001413 containerd[1472]: time="2024-07-02T00:18:20.001318606Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:20.142586 kubelet[2543]: I0702 00:18:20.142510 2543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54f3f28-1765-4f64-a375-409c62d9adde" path="/var/lib/kubelet/pods/c54f3f28-1765-4f64-a375-409c62d9adde/volumes" Jul 2 00:18:20.355256 kubelet[2543]: E0702 00:18:20.354811 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:20.356456 containerd[1472]: time="2024-07-02T00:18:20.356398474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:18:20.645446 systemd[1]: run-containerd-runc-k8s.io-fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37-runc.1fZd49.mount: Deactivated successfully. 
Jul 2 00:18:21.137959 kubelet[2543]: E0702 00:18:21.137358 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" Jul 2 00:18:23.137736 kubelet[2543]: E0702 00:18:23.137639 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" Jul 2 00:18:25.137811 kubelet[2543]: E0702 00:18:25.137267 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" Jul 2 00:18:25.364468 containerd[1472]: time="2024-07-02T00:18:25.363097749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:25.366936 containerd[1472]: time="2024-07-02T00:18:25.366857077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:18:25.375035 containerd[1472]: time="2024-07-02T00:18:25.374969819Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:25.382722 containerd[1472]: time="2024-07-02T00:18:25.381134467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id 
\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.024691109s" Jul 2 00:18:25.382722 containerd[1472]: time="2024-07-02T00:18:25.381188217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:18:25.390990 containerd[1472]: time="2024-07-02T00:18:25.389938622Z" level=info msg="CreateContainer within sandbox \"fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:18:25.399864 containerd[1472]: time="2024-07-02T00:18:25.399684120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:25.551393 containerd[1472]: time="2024-07-02T00:18:25.551161736Z" level=info msg="CreateContainer within sandbox \"fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b\"" Jul 2 00:18:25.554331 containerd[1472]: time="2024-07-02T00:18:25.552728424Z" level=info msg="StartContainer for \"2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b\"" Jul 2 00:18:25.715036 systemd[1]: Started cri-containerd-2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b.scope - libcontainer container 2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b. 
Jul 2 00:18:25.805657 containerd[1472]: time="2024-07-02T00:18:25.805568974Z" level=info msg="StartContainer for \"2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b\" returns successfully" Jul 2 00:18:26.383971 kubelet[2543]: E0702 00:18:26.383848 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:26.867204 systemd[1]: cri-containerd-2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b.scope: Deactivated successfully. Jul 2 00:18:26.930295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b-rootfs.mount: Deactivated successfully. Jul 2 00:18:26.938707 containerd[1472]: time="2024-07-02T00:18:26.938292042Z" level=info msg="shim disconnected" id=2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b namespace=k8s.io Jul 2 00:18:26.938707 containerd[1472]: time="2024-07-02T00:18:26.938386121Z" level=warning msg="cleaning up after shim disconnected" id=2fc1afa534849750603cea643dc287a1f0d3b59a131cb78d643b8f788cc2a28b namespace=k8s.io Jul 2 00:18:26.938707 containerd[1472]: time="2024-07-02T00:18:26.938401107Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:26.981967 kubelet[2543]: I0702 00:18:26.981899 2543 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:18:27.024968 kubelet[2543]: I0702 00:18:27.024728 2543 topology_manager.go:215] "Topology Admit Handler" podUID="78f44af7-be8f-4f60-9f8c-68664bae1d7c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mq5j8" Jul 2 00:18:27.027867 kubelet[2543]: I0702 00:18:27.027265 2543 topology_manager.go:215] "Topology Admit Handler" podUID="f18b362e-cb9c-4d57-98c4-b6cecd0957a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vplpg" Jul 2 00:18:27.039275 kubelet[2543]: I0702 00:18:27.036969 2543 
topology_manager.go:215] "Topology Admit Handler" podUID="78d1aa2f-5769-4d6f-8574-3eb177d83dcb" podNamespace="calico-system" podName="calico-kube-controllers-df7b6c459-dxjdh" Jul 2 00:18:27.047826 systemd[1]: Created slice kubepods-burstable-pod78f44af7_be8f_4f60_9f8c_68664bae1d7c.slice - libcontainer container kubepods-burstable-pod78f44af7_be8f_4f60_9f8c_68664bae1d7c.slice. Jul 2 00:18:27.059053 systemd[1]: Created slice kubepods-burstable-podf18b362e_cb9c_4d57_98c4_b6cecd0957a3.slice - libcontainer container kubepods-burstable-podf18b362e_cb9c_4d57_98c4_b6cecd0957a3.slice. Jul 2 00:18:27.070935 systemd[1]: Created slice kubepods-besteffort-pod78d1aa2f_5769_4d6f_8574_3eb177d83dcb.slice - libcontainer container kubepods-besteffort-pod78d1aa2f_5769_4d6f_8574_3eb177d83dcb.slice. Jul 2 00:18:27.124176 kubelet[2543]: I0702 00:18:27.123987 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfmtf\" (UniqueName: \"kubernetes.io/projected/f18b362e-cb9c-4d57-98c4-b6cecd0957a3-kube-api-access-qfmtf\") pod \"coredns-7db6d8ff4d-vplpg\" (UID: \"f18b362e-cb9c-4d57-98c4-b6cecd0957a3\") " pod="kube-system/coredns-7db6d8ff4d-vplpg" Jul 2 00:18:27.124176 kubelet[2543]: I0702 00:18:27.124059 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78d1aa2f-5769-4d6f-8574-3eb177d83dcb-tigera-ca-bundle\") pod \"calico-kube-controllers-df7b6c459-dxjdh\" (UID: \"78d1aa2f-5769-4d6f-8574-3eb177d83dcb\") " pod="calico-system/calico-kube-controllers-df7b6c459-dxjdh" Jul 2 00:18:27.124176 kubelet[2543]: I0702 00:18:27.124095 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f18b362e-cb9c-4d57-98c4-b6cecd0957a3-config-volume\") pod \"coredns-7db6d8ff4d-vplpg\" (UID: \"f18b362e-cb9c-4d57-98c4-b6cecd0957a3\") " 
pod="kube-system/coredns-7db6d8ff4d-vplpg" Jul 2 00:18:27.124176 kubelet[2543]: I0702 00:18:27.124133 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78f44af7-be8f-4f60-9f8c-68664bae1d7c-config-volume\") pod \"coredns-7db6d8ff4d-mq5j8\" (UID: \"78f44af7-be8f-4f60-9f8c-68664bae1d7c\") " pod="kube-system/coredns-7db6d8ff4d-mq5j8" Jul 2 00:18:27.124176 kubelet[2543]: I0702 00:18:27.124164 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8w7\" (UniqueName: \"kubernetes.io/projected/78f44af7-be8f-4f60-9f8c-68664bae1d7c-kube-api-access-cs8w7\") pod \"coredns-7db6d8ff4d-mq5j8\" (UID: \"78f44af7-be8f-4f60-9f8c-68664bae1d7c\") " pod="kube-system/coredns-7db6d8ff4d-mq5j8" Jul 2 00:18:27.124621 kubelet[2543]: I0702 00:18:27.124187 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zprk\" (UniqueName: \"kubernetes.io/projected/78d1aa2f-5769-4d6f-8574-3eb177d83dcb-kube-api-access-2zprk\") pod \"calico-kube-controllers-df7b6c459-dxjdh\" (UID: \"78d1aa2f-5769-4d6f-8574-3eb177d83dcb\") " pod="calico-system/calico-kube-controllers-df7b6c459-dxjdh" Jul 2 00:18:27.155157 systemd[1]: Created slice kubepods-besteffort-podd3d95f80_f22a_4d64_99eb_0d72b7beb76e.slice - libcontainer container kubepods-besteffort-podd3d95f80_f22a_4d64_99eb_0d72b7beb76e.slice. 
Jul 2 00:18:27.163212 containerd[1472]: time="2024-07-02T00:18:27.162965258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpgqd,Uid:d3d95f80-f22a-4d64-99eb-0d72b7beb76e,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:27.359863 kubelet[2543]: E0702 00:18:27.358621 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:27.366598 containerd[1472]: time="2024-07-02T00:18:27.363483867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mq5j8,Uid:78f44af7-be8f-4f60-9f8c-68664bae1d7c,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:27.366820 kubelet[2543]: E0702 00:18:27.365510 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:27.366903 containerd[1472]: time="2024-07-02T00:18:27.366788956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vplpg,Uid:f18b362e-cb9c-4d57-98c4-b6cecd0957a3,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:27.385519 containerd[1472]: time="2024-07-02T00:18:27.385342082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df7b6c459-dxjdh,Uid:78d1aa2f-5769-4d6f-8574-3eb177d83dcb,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:27.415614 kubelet[2543]: E0702 00:18:27.415245 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:27.421452 containerd[1472]: time="2024-07-02T00:18:27.420685723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:18:27.582438 containerd[1472]: time="2024-07-02T00:18:27.582249736Z" level=error msg="Failed to destroy network for sandbox 
\"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.596199 containerd[1472]: time="2024-07-02T00:18:27.594270419Z" level=error msg="encountered an error cleaning up failed sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.596199 containerd[1472]: time="2024-07-02T00:18:27.594397110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpgqd,Uid:d3d95f80-f22a-4d64-99eb-0d72b7beb76e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.597278 kubelet[2543]: E0702 00:18:27.594752 2543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.597278 kubelet[2543]: E0702 00:18:27.594832 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpgqd" Jul 2 00:18:27.597278 kubelet[2543]: E0702 00:18:27.594857 2543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpgqd" Jul 2 00:18:27.597563 kubelet[2543]: E0702 00:18:27.594911 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpgqd_calico-system(d3d95f80-f22a-4d64-99eb-0d72b7beb76e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpgqd_calico-system(d3d95f80-f22a-4d64-99eb-0d72b7beb76e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" Jul 2 00:18:27.687821 containerd[1472]: time="2024-07-02T00:18:27.687321192Z" level=error msg="Failed to destroy network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.689738 containerd[1472]: time="2024-07-02T00:18:27.689669769Z" level=error msg="encountered an error cleaning up failed sandbox 
\"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.690173 containerd[1472]: time="2024-07-02T00:18:27.689778094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vplpg,Uid:f18b362e-cb9c-4d57-98c4-b6cecd0957a3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.690257 kubelet[2543]: E0702 00:18:27.690138 2543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.690257 kubelet[2543]: E0702 00:18:27.690208 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vplpg" Jul 2 00:18:27.690257 kubelet[2543]: E0702 00:18:27.690240 2543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vplpg" Jul 2 00:18:27.692725 kubelet[2543]: E0702 00:18:27.692553 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vplpg_kube-system(f18b362e-cb9c-4d57-98c4-b6cecd0957a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vplpg_kube-system(f18b362e-cb9c-4d57-98c4-b6cecd0957a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vplpg" podUID="f18b362e-cb9c-4d57-98c4-b6cecd0957a3" Jul 2 00:18:27.696833 containerd[1472]: time="2024-07-02T00:18:27.696688706Z" level=error msg="Failed to destroy network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.698027 containerd[1472]: time="2024-07-02T00:18:27.697883692Z" level=error msg="encountered an error cleaning up failed sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.698379 containerd[1472]: time="2024-07-02T00:18:27.698199947Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mq5j8,Uid:78f44af7-be8f-4f60-9f8c-68664bae1d7c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.699674 kubelet[2543]: E0702 00:18:27.698771 2543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.699674 kubelet[2543]: E0702 00:18:27.698842 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mq5j8" Jul 2 00:18:27.699674 kubelet[2543]: E0702 00:18:27.698871 2543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mq5j8" Jul 2 00:18:27.699900 kubelet[2543]: E0702 00:18:27.698926 2543 pod_workers.go:1298] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-mq5j8_kube-system(78f44af7-be8f-4f60-9f8c-68664bae1d7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-mq5j8_kube-system(78f44af7-be8f-4f60-9f8c-68664bae1d7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mq5j8" podUID="78f44af7-be8f-4f60-9f8c-68664bae1d7c" Jul 2 00:18:27.705184 containerd[1472]: time="2024-07-02T00:18:27.705006157Z" level=error msg="Failed to destroy network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.705801 containerd[1472]: time="2024-07-02T00:18:27.705744114Z" level=error msg="encountered an error cleaning up failed sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.706158 containerd[1472]: time="2024-07-02T00:18:27.705994484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df7b6c459-dxjdh,Uid:78d1aa2f-5769-4d6f-8574-3eb177d83dcb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.706863 kubelet[2543]: E0702 00:18:27.706475 2543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:27.706863 kubelet[2543]: E0702 00:18:27.706681 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df7b6c459-dxjdh" Jul 2 00:18:27.706863 kubelet[2543]: E0702 00:18:27.706719 2543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df7b6c459-dxjdh" Jul 2 00:18:27.707745 kubelet[2543]: E0702 00:18:27.706801 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-df7b6c459-dxjdh_calico-system(78d1aa2f-5769-4d6f-8574-3eb177d83dcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-df7b6c459-dxjdh_calico-system(78d1aa2f-5769-4d6f-8574-3eb177d83dcb)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-df7b6c459-dxjdh" podUID="78d1aa2f-5769-4d6f-8574-3eb177d83dcb" Jul 2 00:18:27.934218 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0-shm.mount: Deactivated successfully. Jul 2 00:18:28.433839 kubelet[2543]: I0702 00:18:28.431878 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:28.436926 containerd[1472]: time="2024-07-02T00:18:28.436491916Z" level=info msg="StopPodSandbox for \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\"" Jul 2 00:18:28.438243 containerd[1472]: time="2024-07-02T00:18:28.437592073Z" level=info msg="Ensure that sandbox 12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b in task-service has been cleanup successfully" Jul 2 00:18:28.449754 kubelet[2543]: I0702 00:18:28.449097 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:28.451619 containerd[1472]: time="2024-07-02T00:18:28.450797210Z" level=info msg="StopPodSandbox for \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\"" Jul 2 00:18:28.455369 containerd[1472]: time="2024-07-02T00:18:28.454789525Z" level=info msg="Ensure that sandbox e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572 in task-service has been cleanup successfully" Jul 2 00:18:28.465990 kubelet[2543]: I0702 00:18:28.465948 2543 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:28.469648 containerd[1472]: time="2024-07-02T00:18:28.467014768Z" level=info msg="StopPodSandbox for \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\"" Jul 2 00:18:28.469648 containerd[1472]: time="2024-07-02T00:18:28.467336493Z" level=info msg="Ensure that sandbox 7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0 in task-service has been cleanup successfully" Jul 2 00:18:28.470216 kubelet[2543]: I0702 00:18:28.470177 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:28.472460 containerd[1472]: time="2024-07-02T00:18:28.472040768Z" level=info msg="StopPodSandbox for \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\"" Jul 2 00:18:28.472460 containerd[1472]: time="2024-07-02T00:18:28.472370285Z" level=info msg="Ensure that sandbox c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284 in task-service has been cleanup successfully" Jul 2 00:18:28.525029 containerd[1472]: time="2024-07-02T00:18:28.524957246Z" level=error msg="StopPodSandbox for \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\" failed" error="failed to destroy network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:28.525604 kubelet[2543]: E0702 00:18:28.525321 2543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:28.525604 kubelet[2543]: E0702 00:18:28.525383 2543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b"} Jul 2 00:18:28.525604 kubelet[2543]: E0702 00:18:28.525428 2543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f18b362e-cb9c-4d57-98c4-b6cecd0957a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:18:28.525604 kubelet[2543]: E0702 00:18:28.525457 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f18b362e-cb9c-4d57-98c4-b6cecd0957a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vplpg" podUID="f18b362e-cb9c-4d57-98c4-b6cecd0957a3" Jul 2 00:18:28.558567 containerd[1472]: time="2024-07-02T00:18:28.557735718Z" level=error msg="StopPodSandbox for \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\" failed" error="failed to destroy network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 2 00:18:28.558744 kubelet[2543]: E0702 00:18:28.558236 2543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:28.558744 kubelet[2543]: E0702 00:18:28.558309 2543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284"} Jul 2 00:18:28.558744 kubelet[2543]: E0702 00:18:28.558353 2543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78d1aa2f-5769-4d6f-8574-3eb177d83dcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:18:28.558744 kubelet[2543]: E0702 00:18:28.558385 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78d1aa2f-5769-4d6f-8574-3eb177d83dcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-df7b6c459-dxjdh" 
podUID="78d1aa2f-5769-4d6f-8574-3eb177d83dcb" Jul 2 00:18:28.569511 containerd[1472]: time="2024-07-02T00:18:28.569442752Z" level=error msg="StopPodSandbox for \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\" failed" error="failed to destroy network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:28.569918 kubelet[2543]: E0702 00:18:28.569863 2543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:28.570255 kubelet[2543]: E0702 00:18:28.569954 2543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572"} Jul 2 00:18:28.570255 kubelet[2543]: E0702 00:18:28.570017 2543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78f44af7-be8f-4f60-9f8c-68664bae1d7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:18:28.570255 kubelet[2543]: E0702 00:18:28.570056 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"78f44af7-be8f-4f60-9f8c-68664bae1d7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mq5j8" podUID="78f44af7-be8f-4f60-9f8c-68664bae1d7c" Jul 2 00:18:28.576984 containerd[1472]: time="2024-07-02T00:18:28.576909588Z" level=error msg="StopPodSandbox for \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\" failed" error="failed to destroy network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:28.577689 kubelet[2543]: E0702 00:18:28.577621 2543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:28.577841 kubelet[2543]: E0702 00:18:28.577692 2543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0"} Jul 2 00:18:28.577841 kubelet[2543]: E0702 00:18:28.577744 2543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3d95f80-f22a-4d64-99eb-0d72b7beb76e\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:18:28.577841 kubelet[2543]: E0702 00:18:28.577785 2543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3d95f80-f22a-4d64-99eb-0d72b7beb76e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpgqd" podUID="d3d95f80-f22a-4d64-99eb-0d72b7beb76e" Jul 2 00:18:33.273317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546558791.mount: Deactivated successfully. 
Jul 2 00:18:33.305306 containerd[1472]: time="2024-07-02T00:18:33.305240957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:33.308067 containerd[1472]: time="2024-07-02T00:18:33.307978655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:18:33.308580 containerd[1472]: time="2024-07-02T00:18:33.308493496Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:33.311976 containerd[1472]: time="2024-07-02T00:18:33.311879228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:33.312586 containerd[1472]: time="2024-07-02T00:18:33.312522896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 5.891775861s" Jul 2 00:18:33.312670 containerd[1472]: time="2024-07-02T00:18:33.312592463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:18:33.336676 containerd[1472]: time="2024-07-02T00:18:33.336508418Z" level=info msg="CreateContainer within sandbox \"fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:18:33.378432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69443002.mount: Deactivated 
successfully. Jul 2 00:18:33.389171 containerd[1472]: time="2024-07-02T00:18:33.389099212Z" level=info msg="CreateContainer within sandbox \"fa3cb5814ab7dad73668d33a5254f410a1eb8783dfc3dfd0c7f711be5df9ff37\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bdb8a50bcc3358ab261ea271a8f57c7e4df71dc242aeefc034977e31517d7d1c\"" Jul 2 00:18:33.390658 containerd[1472]: time="2024-07-02T00:18:33.390229543Z" level=info msg="StartContainer for \"bdb8a50bcc3358ab261ea271a8f57c7e4df71dc242aeefc034977e31517d7d1c\"" Jul 2 00:18:33.473870 systemd[1]: Started cri-containerd-bdb8a50bcc3358ab261ea271a8f57c7e4df71dc242aeefc034977e31517d7d1c.scope - libcontainer container bdb8a50bcc3358ab261ea271a8f57c7e4df71dc242aeefc034977e31517d7d1c. Jul 2 00:18:33.542053 containerd[1472]: time="2024-07-02T00:18:33.540066936Z" level=info msg="StartContainer for \"bdb8a50bcc3358ab261ea271a8f57c7e4df71dc242aeefc034977e31517d7d1c\" returns successfully" Jul 2 00:18:33.661153 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:18:33.662459 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 00:18:34.500024 kubelet[2543]: E0702 00:18:34.499696 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:35.503803 kubelet[2543]: E0702 00:18:35.503771 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:35.527999 systemd[1]: run-containerd-runc-k8s.io-bdb8a50bcc3358ab261ea271a8f57c7e4df71dc242aeefc034977e31517d7d1c-runc.5RhB6q.mount: Deactivated successfully. 
Jul 2 00:18:40.140807 containerd[1472]: time="2024-07-02T00:18:40.140753456Z" level=info msg="StopPodSandbox for \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\"" Jul 2 00:18:40.209231 kubelet[2543]: I0702 00:18:40.208469 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n6dkq" podStartSLOduration=8.250015396 podStartE2EDuration="21.208440776s" podCreationTimestamp="2024-07-02 00:18:19 +0000 UTC" firstStartedPulling="2024-07-02 00:18:20.355919207 +0000 UTC m=+28.417958468" lastFinishedPulling="2024-07-02 00:18:33.31434457 +0000 UTC m=+41.376383848" observedRunningTime="2024-07-02 00:18:34.538842508 +0000 UTC m=+42.600881786" watchObservedRunningTime="2024-07-02 00:18:40.208440776 +0000 UTC m=+48.270480052" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.206 [INFO][4155] k8s.go 608: Cleaning up netns ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.208 [INFO][4155] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" iface="eth0" netns="/var/run/netns/cni-3593a9e0-373a-d679-3394-6e2b87156bf5" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.208 [INFO][4155] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" iface="eth0" netns="/var/run/netns/cni-3593a9e0-373a-d679-3394-6e2b87156bf5" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.208 [INFO][4155] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" iface="eth0" netns="/var/run/netns/cni-3593a9e0-373a-d679-3394-6e2b87156bf5" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.208 [INFO][4155] k8s.go 615: Releasing IP address(es) ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.208 [INFO][4155] utils.go 188: Calico CNI releasing IP address ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.370 [INFO][4162] ipam_plugin.go 411: Releasing address using handleID ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.371 [INFO][4162] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.371 [INFO][4162] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.389 [WARNING][4162] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.389 [INFO][4162] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.392 [INFO][4162] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:40.396273 containerd[1472]: 2024-07-02 00:18:40.393 [INFO][4155] k8s.go 621: Teardown processing complete. ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:40.397270 containerd[1472]: time="2024-07-02T00:18:40.396611031Z" level=info msg="TearDown network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\" successfully" Jul 2 00:18:40.397270 containerd[1472]: time="2024-07-02T00:18:40.396649724Z" level=info msg="StopPodSandbox for \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\" returns successfully" Jul 2 00:18:40.400632 containerd[1472]: time="2024-07-02T00:18:40.399901750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df7b6c459-dxjdh,Uid:78d1aa2f-5769-4d6f-8574-3eb177d83dcb,Namespace:calico-system,Attempt:1,}" Jul 2 00:18:40.402338 systemd[1]: run-netns-cni\x2d3593a9e0\x2d373a\x2dd679\x2d3394\x2d6e2b87156bf5.mount: Deactivated successfully. 
Jul 2 00:18:40.606670 systemd-networkd[1371]: califd1516030ca: Link UP Jul 2 00:18:40.608869 systemd-networkd[1371]: califd1516030ca: Gained carrier Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.457 [INFO][4171] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.473 [INFO][4171] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0 calico-kube-controllers-df7b6c459- calico-system 78d1aa2f-5769-4d6f-8574-3eb177d83dcb 848 0 2024-07-02 00:18:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:df7b6c459 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.1.1-c-5be545c9fd calico-kube-controllers-df7b6c459-dxjdh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califd1516030ca [] []}} ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Namespace="calico-system" Pod="calico-kube-controllers-df7b6c459-dxjdh" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.473 [INFO][4171] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Namespace="calico-system" Pod="calico-kube-controllers-df7b6c459-dxjdh" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.519 [INFO][4181] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" 
HandleID="k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.537 [INFO][4181] ipam_plugin.go 264: Auto assigning IP ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" HandleID="k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-c-5be545c9fd", "pod":"calico-kube-controllers-df7b6c459-dxjdh", "timestamp":"2024-07-02 00:18:40.519208971 +0000 UTC"}, Hostname:"ci-3975.1.1-c-5be545c9fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.537 [INFO][4181] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.537 [INFO][4181] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.537 [INFO][4181] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-c-5be545c9fd' Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.541 [INFO][4181] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.550 [INFO][4181] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.559 [INFO][4181] ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.563 [INFO][4181] ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.568 [INFO][4181] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.568 [INFO][4181] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.572 [INFO][4181] ipam.go 1685: Creating new handle: k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0 Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.578 [INFO][4181] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.586 [INFO][4181] ipam.go 1216: Successfully claimed IPs: [192.168.63.65/26] block=192.168.63.64/26 
handle="k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.587 [INFO][4181] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.65/26] handle="k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.587 [INFO][4181] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:40.642777 containerd[1472]: 2024-07-02 00:18:40.587 [INFO][4181] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.65/26] IPv6=[] ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" HandleID="k8s-pod-network.91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.645147 containerd[1472]: 2024-07-02 00:18:40.590 [INFO][4171] k8s.go 386: Populated endpoint ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Namespace="calico-system" Pod="calico-kube-controllers-df7b6c459-dxjdh" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0", GenerateName:"calico-kube-controllers-df7b6c459-", Namespace:"calico-system", SelfLink:"", UID:"78d1aa2f-5769-4d6f-8574-3eb177d83dcb", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df7b6c459", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"", Pod:"calico-kube-controllers-df7b6c459-dxjdh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd1516030ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:40.645147 containerd[1472]: 2024-07-02 00:18:40.590 [INFO][4171] k8s.go 387: Calico CNI using IPs: [192.168.63.65/32] ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Namespace="calico-system" Pod="calico-kube-controllers-df7b6c459-dxjdh" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.645147 containerd[1472]: 2024-07-02 00:18:40.590 [INFO][4171] dataplane_linux.go 68: Setting the host side veth name to califd1516030ca ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Namespace="calico-system" Pod="calico-kube-controllers-df7b6c459-dxjdh" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.645147 containerd[1472]: 2024-07-02 00:18:40.609 [INFO][4171] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Namespace="calico-system" Pod="calico-kube-controllers-df7b6c459-dxjdh" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 
00:18:40.645147 containerd[1472]: 2024-07-02 00:18:40.611 [INFO][4171] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Namespace="calico-system" Pod="calico-kube-controllers-df7b6c459-dxjdh" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0", GenerateName:"calico-kube-controllers-df7b6c459-", Namespace:"calico-system", SelfLink:"", UID:"78d1aa2f-5769-4d6f-8574-3eb177d83dcb", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df7b6c459", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0", Pod:"calico-kube-controllers-df7b6c459-dxjdh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd1516030ca", MAC:"b2:0b:06:8d:62:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 
00:18:40.645147 containerd[1472]: 2024-07-02 00:18:40.638 [INFO][4171] k8s.go 500: Wrote updated endpoint to datastore ContainerID="91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0" Namespace="calico-system" Pod="calico-kube-controllers-df7b6c459-dxjdh" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:40.692105 containerd[1472]: time="2024-07-02T00:18:40.691836913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:40.692105 containerd[1472]: time="2024-07-02T00:18:40.691918769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:40.692105 containerd[1472]: time="2024-07-02T00:18:40.691943705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:40.692105 containerd[1472]: time="2024-07-02T00:18:40.691957475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:40.720801 systemd[1]: run-containerd-runc-k8s.io-91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0-runc.gdYHhK.mount: Deactivated successfully. Jul 2 00:18:40.729050 systemd[1]: Started cri-containerd-91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0.scope - libcontainer container 91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0. 
Jul 2 00:18:40.824097 containerd[1472]: time="2024-07-02T00:18:40.823985445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df7b6c459-dxjdh,Uid:78d1aa2f-5769-4d6f-8574-3eb177d83dcb,Namespace:calico-system,Attempt:1,} returns sandbox id \"91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0\"" Jul 2 00:18:40.830268 containerd[1472]: time="2024-07-02T00:18:40.829733535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:18:41.139889 containerd[1472]: time="2024-07-02T00:18:41.138580560Z" level=info msg="StopPodSandbox for \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\"" Jul 2 00:18:41.140206 containerd[1472]: time="2024-07-02T00:18:41.140162675Z" level=info msg="StopPodSandbox for \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\"" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.222 [INFO][4288] k8s.go 608: Cleaning up netns ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.223 [INFO][4288] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" iface="eth0" netns="/var/run/netns/cni-e8f46943-9589-0809-f6ba-1549f3938d32" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.224 [INFO][4288] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" iface="eth0" netns="/var/run/netns/cni-e8f46943-9589-0809-f6ba-1549f3938d32" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.228 [INFO][4288] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" iface="eth0" netns="/var/run/netns/cni-e8f46943-9589-0809-f6ba-1549f3938d32" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.228 [INFO][4288] k8s.go 615: Releasing IP address(es) ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.228 [INFO][4288] utils.go 188: Calico CNI releasing IP address ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.278 [INFO][4301] ipam_plugin.go 411: Releasing address using handleID ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.279 [INFO][4301] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.279 [INFO][4301] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.290 [WARNING][4301] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.291 [INFO][4301] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.294 [INFO][4301] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:41.300523 containerd[1472]: 2024-07-02 00:18:41.296 [INFO][4288] k8s.go 621: Teardown processing complete. ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:41.304257 containerd[1472]: time="2024-07-02T00:18:41.302507671Z" level=info msg="TearDown network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\" successfully" Jul 2 00:18:41.304257 containerd[1472]: time="2024-07-02T00:18:41.302589593Z" level=info msg="StopPodSandbox for \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\" returns successfully" Jul 2 00:18:41.304496 kubelet[2543]: E0702 00:18:41.303251 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:41.305521 containerd[1472]: time="2024-07-02T00:18:41.304362278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vplpg,Uid:f18b362e-cb9c-4d57-98c4-b6cecd0957a3,Namespace:kube-system,Attempt:1,}" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.253 [INFO][4284] k8s.go 608: Cleaning up netns 
ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.255 [INFO][4284] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" iface="eth0" netns="/var/run/netns/cni-1546cd26-24b8-c6ab-dd39-46be09c64900" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.256 [INFO][4284] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" iface="eth0" netns="/var/run/netns/cni-1546cd26-24b8-c6ab-dd39-46be09c64900" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.256 [INFO][4284] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" iface="eth0" netns="/var/run/netns/cni-1546cd26-24b8-c6ab-dd39-46be09c64900" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.256 [INFO][4284] k8s.go 615: Releasing IP address(es) ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.256 [INFO][4284] utils.go 188: Calico CNI releasing IP address ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.292 [INFO][4306] ipam_plugin.go 411: Releasing address using handleID ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.293 [INFO][4306] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.294 [INFO][4306] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.304 [WARNING][4306] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.305 [INFO][4306] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.312 [INFO][4306] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:41.318688 containerd[1472]: 2024-07-02 00:18:41.314 [INFO][4284] k8s.go 621: Teardown processing complete. 
ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:41.319837 containerd[1472]: time="2024-07-02T00:18:41.318339358Z" level=info msg="TearDown network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\" successfully" Jul 2 00:18:41.319837 containerd[1472]: time="2024-07-02T00:18:41.319199188Z" level=info msg="StopPodSandbox for \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\" returns successfully" Jul 2 00:18:41.319974 kubelet[2543]: E0702 00:18:41.319954 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:41.321146 containerd[1472]: time="2024-07-02T00:18:41.321084524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mq5j8,Uid:78f44af7-be8f-4f60-9f8c-68664bae1d7c,Namespace:kube-system,Attempt:1,}" Jul 2 00:18:41.408274 systemd[1]: run-netns-cni\x2d1546cd26\x2d24b8\x2dc6ab\x2ddd39\x2d46be09c64900.mount: Deactivated successfully. Jul 2 00:18:41.408393 systemd[1]: run-netns-cni\x2de8f46943\x2d9589\x2d0809\x2df6ba\x2d1549f3938d32.mount: Deactivated successfully. 
Jul 2 00:18:41.521865 systemd-networkd[1371]: calie9c0b744628: Link UP
Jul 2 00:18:41.526989 systemd-networkd[1371]: calie9c0b744628: Gained carrier
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.383 [INFO][4325] utils.go 100: File /var/lib/calico/mtu does not exist
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.399 [INFO][4325] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0 coredns-7db6d8ff4d- kube-system 78f44af7-be8f-4f60-9f8c-68664bae1d7c 857 0 2024-07-02 00:18:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-c-5be545c9fd coredns-7db6d8ff4d-mq5j8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie9c0b744628 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mq5j8" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.399 [INFO][4325] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mq5j8" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.452 [INFO][4342] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" HandleID="k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.464 [INFO][4342] ipam_plugin.go 264: Auto assigning IP ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" HandleID="k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edba0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-c-5be545c9fd", "pod":"coredns-7db6d8ff4d-mq5j8", "timestamp":"2024-07-02 00:18:41.452052611 +0000 UTC"}, Hostname:"ci-3975.1.1-c-5be545c9fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.465 [INFO][4342] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.465 [INFO][4342] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.465 [INFO][4342] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-c-5be545c9fd'
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.468 [INFO][4342] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.475 [INFO][4342] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.486 [INFO][4342] ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.490 [INFO][4342] ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.494 [INFO][4342] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.494 [INFO][4342] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.497 [INFO][4342] ipam.go 1685: Creating new handle: k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.503 [INFO][4342] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.511 [INFO][4342] ipam.go 1216: Successfully claimed IPs: [192.168.63.66/26] block=192.168.63.64/26 handle="k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.511 [INFO][4342] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.66/26] handle="k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.512 [INFO][4342] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:18:41.554650 containerd[1472]: 2024-07-02 00:18:41.512 [INFO][4342] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.66/26] IPv6=[] ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" HandleID="k8s-pod-network.1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0"
Jul 2 00:18:41.556248 containerd[1472]: 2024-07-02 00:18:41.515 [INFO][4325] k8s.go 386: Populated endpoint ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mq5j8" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"78f44af7-be8f-4f60-9f8c-68664bae1d7c", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"", Pod:"coredns-7db6d8ff4d-mq5j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c0b744628", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:18:41.556248 containerd[1472]: 2024-07-02 00:18:41.515 [INFO][4325] k8s.go 387: Calico CNI using IPs: [192.168.63.66/32] ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mq5j8" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0"
Jul 2 00:18:41.556248 containerd[1472]: 2024-07-02 00:18:41.516 [INFO][4325] dataplane_linux.go 68: Setting the host side veth name to calie9c0b744628 ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mq5j8" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0"
Jul 2 00:18:41.556248 containerd[1472]: 2024-07-02 00:18:41.530 [INFO][4325] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mq5j8" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0"
Jul 2 00:18:41.556248 containerd[1472]: 2024-07-02 00:18:41.532 [INFO][4325] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mq5j8" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"78f44af7-be8f-4f60-9f8c-68664bae1d7c", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d", Pod:"coredns-7db6d8ff4d-mq5j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c0b744628", MAC:"e2:42:3f:05:09:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:18:41.556248 containerd[1472]: 2024-07-02 00:18:41.550 [INFO][4325] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mq5j8" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0"
Jul 2 00:18:41.609795 containerd[1472]: time="2024-07-02T00:18:41.609312413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:41.610863 systemd-networkd[1371]: cali33386cc29b4: Link UP
Jul 2 00:18:41.612890 containerd[1472]: time="2024-07-02T00:18:41.609386853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:41.612890 containerd[1472]: time="2024-07-02T00:18:41.612693745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:41.613620 systemd-networkd[1371]: cali33386cc29b4: Gained carrier
Jul 2 00:18:41.616193 containerd[1472]: time="2024-07-02T00:18:41.612734204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.370 [INFO][4314] utils.go 100: File /var/lib/calico/mtu does not exist
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.392 [INFO][4314] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0 coredns-7db6d8ff4d- kube-system f18b362e-cb9c-4d57-98c4-b6cecd0957a3 856 0 2024-07-02 00:18:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-c-5be545c9fd coredns-7db6d8ff4d-vplpg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali33386cc29b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vplpg" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.392 [INFO][4314] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vplpg" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.455 [INFO][4340] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" HandleID="k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.472 [INFO][4340] ipam_plugin.go 264: Auto assigning IP ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" HandleID="k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000504b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-c-5be545c9fd", "pod":"coredns-7db6d8ff4d-vplpg", "timestamp":"2024-07-02 00:18:41.455493902 +0000 UTC"}, Hostname:"ci-3975.1.1-c-5be545c9fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.472 [INFO][4340] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.512 [INFO][4340] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.512 [INFO][4340] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-c-5be545c9fd'
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.520 [INFO][4340] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.535 [INFO][4340] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.556 [INFO][4340] ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.561 [INFO][4340] ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.567 [INFO][4340] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.567 [INFO][4340] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.571 [INFO][4340] ipam.go 1685: Creating new handle: k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.581 [INFO][4340] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.596 [INFO][4340] ipam.go 1216: Successfully claimed IPs: [192.168.63.67/26] block=192.168.63.64/26 handle="k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.596 [INFO][4340] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.67/26] handle="k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.596 [INFO][4340] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:18:41.656215 containerd[1472]: 2024-07-02 00:18:41.596 [INFO][4340] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.67/26] IPv6=[] ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" HandleID="k8s-pod-network.cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0"
Jul 2 00:18:41.658074 containerd[1472]: 2024-07-02 00:18:41.604 [INFO][4314] k8s.go 386: Populated endpoint ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vplpg" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f18b362e-cb9c-4d57-98c4-b6cecd0957a3", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"", Pod:"coredns-7db6d8ff4d-vplpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33386cc29b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:18:41.658074 containerd[1472]: 2024-07-02 00:18:41.604 [INFO][4314] k8s.go 387: Calico CNI using IPs: [192.168.63.67/32] ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vplpg" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0"
Jul 2 00:18:41.658074 containerd[1472]: 2024-07-02 00:18:41.604 [INFO][4314] dataplane_linux.go 68: Setting the host side veth name to cali33386cc29b4 ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vplpg" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0"
Jul 2 00:18:41.658074 containerd[1472]: 2024-07-02 00:18:41.617 [INFO][4314] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vplpg" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0"
Jul 2 00:18:41.658074 containerd[1472]: 2024-07-02 00:18:41.620 [INFO][4314] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vplpg" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f18b362e-cb9c-4d57-98c4-b6cecd0957a3", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da", Pod:"coredns-7db6d8ff4d-vplpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33386cc29b4", MAC:"f2:dd:31:54:6a:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:18:41.658074 containerd[1472]: 2024-07-02 00:18:41.650 [INFO][4314] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vplpg" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0"
Jul 2 00:18:41.674115 systemd[1]: Started cri-containerd-1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d.scope - libcontainer container 1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d.
Jul 2 00:18:41.705865 containerd[1472]: time="2024-07-02T00:18:41.705715984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:41.705865 containerd[1472]: time="2024-07-02T00:18:41.705827187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:41.707047 containerd[1472]: time="2024-07-02T00:18:41.706172607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:41.708221 containerd[1472]: time="2024-07-02T00:18:41.708126698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:41.757796 systemd[1]: Started cri-containerd-cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da.scope - libcontainer container cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da.
Jul 2 00:18:41.772493 containerd[1472]: time="2024-07-02T00:18:41.772414653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mq5j8,Uid:78f44af7-be8f-4f60-9f8c-68664bae1d7c,Namespace:kube-system,Attempt:1,} returns sandbox id \"1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d\""
Jul 2 00:18:41.773726 kubelet[2543]: E0702 00:18:41.773698 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:41.800326 containerd[1472]: time="2024-07-02T00:18:41.799975710Z" level=info msg="CreateContainer within sandbox \"1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:18:41.832152 containerd[1472]: time="2024-07-02T00:18:41.832092364Z" level=info msg="CreateContainer within sandbox \"1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f1eefd36d5a9010211916e9e18541f723a4cc868dc1c4383ec3506e493c7dfd\""
Jul 2 00:18:41.835072 containerd[1472]: time="2024-07-02T00:18:41.835029864Z" level=info msg="StartContainer for \"5f1eefd36d5a9010211916e9e18541f723a4cc868dc1c4383ec3506e493c7dfd\""
Jul 2 00:18:41.839968 containerd[1472]: time="2024-07-02T00:18:41.839925490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vplpg,Uid:f18b362e-cb9c-4d57-98c4-b6cecd0957a3,Namespace:kube-system,Attempt:1,} returns sandbox id \"cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da\""
Jul 2 00:18:41.843121 kubelet[2543]: E0702 00:18:41.843085 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:41.851328 containerd[1472]: time="2024-07-02T00:18:41.850813316Z" level=info msg="CreateContainer within sandbox \"cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:18:41.900783 systemd[1]: Started cri-containerd-5f1eefd36d5a9010211916e9e18541f723a4cc868dc1c4383ec3506e493c7dfd.scope - libcontainer container 5f1eefd36d5a9010211916e9e18541f723a4cc868dc1c4383ec3506e493c7dfd.
Jul 2 00:18:41.943823 containerd[1472]: time="2024-07-02T00:18:41.943761688Z" level=info msg="CreateContainer within sandbox \"cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"945e180faa39efe84c40ff77872afc9d08b1413376348f4de3af864bd5b1635c\""
Jul 2 00:18:41.945054 containerd[1472]: time="2024-07-02T00:18:41.945004979Z" level=info msg="StartContainer for \"945e180faa39efe84c40ff77872afc9d08b1413376348f4de3af864bd5b1635c\""
Jul 2 00:18:41.981141 containerd[1472]: time="2024-07-02T00:18:41.980321934Z" level=info msg="StartContainer for \"5f1eefd36d5a9010211916e9e18541f723a4cc868dc1c4383ec3506e493c7dfd\" returns successfully"
Jul 2 00:18:42.016150 systemd[1]: Started cri-containerd-945e180faa39efe84c40ff77872afc9d08b1413376348f4de3af864bd5b1635c.scope - libcontainer container 945e180faa39efe84c40ff77872afc9d08b1413376348f4de3af864bd5b1635c.
Jul 2 00:18:42.161354 systemd-networkd[1371]: califd1516030ca: Gained IPv6LL
Jul 2 00:18:42.169816 containerd[1472]: time="2024-07-02T00:18:42.168460578Z" level=info msg="StartContainer for \"945e180faa39efe84c40ff77872afc9d08b1413376348f4de3af864bd5b1635c\" returns successfully"
Jul 2 00:18:42.412735 systemd[1]: run-containerd-runc-k8s.io-cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da-runc.mK8lbM.mount: Deactivated successfully.
Jul 2 00:18:42.538758 kubelet[2543]: E0702 00:18:42.537784 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:42.562179 kubelet[2543]: E0702 00:18:42.561042 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:42.608191 systemd-networkd[1371]: calie9c0b744628: Gained IPv6LL
Jul 2 00:18:42.621277 kubelet[2543]: I0702 00:18:42.620414 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mq5j8" podStartSLOduration=36.62029098 podStartE2EDuration="36.62029098s" podCreationTimestamp="2024-07-02 00:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:42.598994067 +0000 UTC m=+50.661033345" watchObservedRunningTime="2024-07-02 00:18:42.62029098 +0000 UTC m=+50.682330257"
Jul 2 00:18:42.643179 kubelet[2543]: I0702 00:18:42.642721 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vplpg" podStartSLOduration=36.64270188 podStartE2EDuration="36.64270188s" podCreationTimestamp="2024-07-02 00:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:42.641457495 +0000 UTC m=+50.703496773" watchObservedRunningTime="2024-07-02 00:18:42.64270188 +0000 UTC m=+50.704741159"
Jul 2 00:18:43.250242 systemd-networkd[1371]: cali33386cc29b4: Gained IPv6LL
Jul 2 00:18:43.565796 kubelet[2543]: E0702 00:18:43.564241 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:43.565796 kubelet[2543]: E0702 00:18:43.565039 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:18:43.615807 containerd[1472]: time="2024-07-02T00:18:43.615739849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:43.617416 containerd[1472]: time="2024-07-02T00:18:43.617217586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793"
Jul 2 00:18:43.618349 containerd[1472]: time="2024-07-02T00:18:43.618294147Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:43.621460 containerd[1472]: time="2024-07-02T00:18:43.621307683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:43.624146 containerd[1472]: time="2024-07-02T00:18:43.623994245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.794208377s"
Jul 2 00:18:43.624146 containerd[1472]: time="2024-07-02T00:18:43.624047270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\""
Jul 2 00:18:43.644148 containerd[1472]: time="2024-07-02T00:18:43.644093801Z" level=info msg="CreateContainer within sandbox \"91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 2 00:18:43.663169 containerd[1472]: time="2024-07-02T00:18:43.663113393Z" level=info msg="CreateContainer within sandbox \"91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7d0fb83e8538b08c6ff6fd8653f3825500e4496ec8633357cee28c65bc118f82\""
Jul 2 00:18:43.665496 containerd[1472]: time="2024-07-02T00:18:43.664349718Z" level=info msg="StartContainer for \"7d0fb83e8538b08c6ff6fd8653f3825500e4496ec8633357cee28c65bc118f82\""
Jul 2 00:18:43.706879 systemd[1]: Started cri-containerd-7d0fb83e8538b08c6ff6fd8653f3825500e4496ec8633357cee28c65bc118f82.scope - libcontainer container 7d0fb83e8538b08c6ff6fd8653f3825500e4496ec8633357cee28c65bc118f82.
Jul 2 00:18:43.758440 containerd[1472]: time="2024-07-02T00:18:43.758378975Z" level=info msg="StartContainer for \"7d0fb83e8538b08c6ff6fd8653f3825500e4496ec8633357cee28c65bc118f82\" returns successfully" Jul 2 00:18:43.827638 kubelet[2543]: I0702 00:18:43.826479 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:18:43.827638 kubelet[2543]: E0702 00:18:43.827161 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:44.139691 containerd[1472]: time="2024-07-02T00:18:44.139234843Z" level=info msg="StopPodSandbox for \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\"" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.236 [INFO][4639] k8s.go 608: Cleaning up netns ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.236 [INFO][4639] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" iface="eth0" netns="/var/run/netns/cni-4376e9fc-4340-590e-4229-4f05114da6ec" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.236 [INFO][4639] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" iface="eth0" netns="/var/run/netns/cni-4376e9fc-4340-590e-4229-4f05114da6ec" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.238 [INFO][4639] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" iface="eth0" netns="/var/run/netns/cni-4376e9fc-4340-590e-4229-4f05114da6ec" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.238 [INFO][4639] k8s.go 615: Releasing IP address(es) ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.238 [INFO][4639] utils.go 188: Calico CNI releasing IP address ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.286 [INFO][4650] ipam_plugin.go 411: Releasing address using handleID ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.286 [INFO][4650] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.286 [INFO][4650] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.293 [WARNING][4650] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.293 [INFO][4650] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.308 [INFO][4650] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:44.314986 containerd[1472]: 2024-07-02 00:18:44.312 [INFO][4639] k8s.go 621: Teardown processing complete. ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:44.315925 containerd[1472]: time="2024-07-02T00:18:44.315887287Z" level=info msg="TearDown network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\" successfully" Jul 2 00:18:44.316056 containerd[1472]: time="2024-07-02T00:18:44.315989401Z" level=info msg="StopPodSandbox for \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\" returns successfully" Jul 2 00:18:44.317129 containerd[1472]: time="2024-07-02T00:18:44.316695377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpgqd,Uid:d3d95f80-f22a-4d64-99eb-0d72b7beb76e,Namespace:calico-system,Attempt:1,}" Jul 2 00:18:44.575285 systemd-networkd[1371]: cali02e80a3009e: Link UP Jul 2 00:18:44.578447 systemd-networkd[1371]: cali02e80a3009e: Gained carrier Jul 2 00:18:44.601767 kubelet[2543]: E0702 00:18:44.601706 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" 
Jul 2 00:18:44.620304 kubelet[2543]: E0702 00:18:44.611461 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:44.620304 kubelet[2543]: E0702 00:18:44.616515 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:18:44.656740 systemd[1]: run-netns-cni\x2d4376e9fc\x2d4340\x2d590e\x2d4229\x2d4f05114da6ec.mount: Deactivated successfully. Jul 2 00:18:44.669934 systemd[1]: Started sshd@7-64.23.132.250:22-147.75.109.163:55924.service - OpenSSH per-connection server daemon (147.75.109.163:55924). Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.391 [INFO][4661] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.423 [INFO][4661] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0 csi-node-driver- calico-system d3d95f80-f22a-4d64-99eb-0d72b7beb76e 945 0 2024-07-02 00:18:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.1.1-c-5be545c9fd csi-node-driver-dpgqd eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali02e80a3009e [] []}} ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Namespace="calico-system" Pod="csi-node-driver-dpgqd" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.423 [INFO][4661] k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Namespace="calico-system" Pod="csi-node-driver-dpgqd" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.486 [INFO][4672] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" HandleID="k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.500 [INFO][4672] ipam_plugin.go 264: Auto assigning IP ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" HandleID="k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003182f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-c-5be545c9fd", "pod":"csi-node-driver-dpgqd", "timestamp":"2024-07-02 00:18:44.486275581 +0000 UTC"}, Hostname:"ci-3975.1.1-c-5be545c9fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.501 [INFO][4672] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.501 [INFO][4672] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.501 [INFO][4672] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-c-5be545c9fd' Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.506 [INFO][4672] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.515 [INFO][4672] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.528 [INFO][4672] ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.533 [INFO][4672] ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.538 [INFO][4672] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.539 [INFO][4672] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.542 [INFO][4672] ipam.go 1685: Creating new handle: k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770 Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.550 [INFO][4672] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.561 [INFO][4672] ipam.go 1216: Successfully claimed IPs: [192.168.63.68/26] block=192.168.63.64/26 
handle="k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.561 [INFO][4672] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.68/26] handle="k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.561 [INFO][4672] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:44.702575 containerd[1472]: 2024-07-02 00:18:44.562 [INFO][4672] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.68/26] IPv6=[] ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" HandleID="k8s-pod-network.cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.707349 containerd[1472]: 2024-07-02 00:18:44.566 [INFO][4661] k8s.go 386: Populated endpoint ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Namespace="calico-system" Pod="csi-node-driver-dpgqd" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3d95f80-f22a-4d64-99eb-0d72b7beb76e", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"", Pod:"csi-node-driver-dpgqd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali02e80a3009e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:44.707349 containerd[1472]: 2024-07-02 00:18:44.568 [INFO][4661] k8s.go 387: Calico CNI using IPs: [192.168.63.68/32] ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Namespace="calico-system" Pod="csi-node-driver-dpgqd" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.707349 containerd[1472]: 2024-07-02 00:18:44.568 [INFO][4661] dataplane_linux.go 68: Setting the host side veth name to cali02e80a3009e ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Namespace="calico-system" Pod="csi-node-driver-dpgqd" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.707349 containerd[1472]: 2024-07-02 00:18:44.576 [INFO][4661] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Namespace="calico-system" Pod="csi-node-driver-dpgqd" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.707349 containerd[1472]: 2024-07-02 00:18:44.586 [INFO][4661] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" 
Namespace="calico-system" Pod="csi-node-driver-dpgqd" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3d95f80-f22a-4d64-99eb-0d72b7beb76e", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770", Pod:"csi-node-driver-dpgqd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali02e80a3009e", MAC:"da:fa:80:cf:fa:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:44.707349 containerd[1472]: 2024-07-02 00:18:44.674 [INFO][4661] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770" Namespace="calico-system" Pod="csi-node-driver-dpgqd" 
WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:44.754276 systemd[1]: run-containerd-runc-k8s.io-7d0fb83e8538b08c6ff6fd8653f3825500e4496ec8633357cee28c65bc118f82-runc.UCIzob.mount: Deactivated successfully. Jul 2 00:18:44.835141 containerd[1472]: time="2024-07-02T00:18:44.834678051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:44.835141 containerd[1472]: time="2024-07-02T00:18:44.834819795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:44.835619 containerd[1472]: time="2024-07-02T00:18:44.834915195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:44.835619 containerd[1472]: time="2024-07-02T00:18:44.834977662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:44.857643 sshd[4688]: Accepted publickey for core from 147.75.109.163 port 55924 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:18:44.861012 sshd[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:44.875834 systemd-logind[1445]: New session 8 of user core. Jul 2 00:18:44.878928 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 00:18:44.891179 kubelet[2543]: I0702 00:18:44.888466 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-df7b6c459-dxjdh" podStartSLOduration=28.089117569 podStartE2EDuration="30.888445194s" podCreationTimestamp="2024-07-02 00:18:14 +0000 UTC" firstStartedPulling="2024-07-02 00:18:40.825739448 +0000 UTC m=+48.887778709" lastFinishedPulling="2024-07-02 00:18:43.625067067 +0000 UTC m=+51.687106334" observedRunningTime="2024-07-02 00:18:44.719286807 +0000 UTC m=+52.781326086" watchObservedRunningTime="2024-07-02 00:18:44.888445194 +0000 UTC m=+52.950484471" Jul 2 00:18:44.911806 systemd[1]: Started cri-containerd-cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770.scope - libcontainer container cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770. Jul 2 00:18:45.131693 containerd[1472]: time="2024-07-02T00:18:45.130780742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpgqd,Uid:d3d95f80-f22a-4d64-99eb-0d72b7beb76e,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770\"" Jul 2 00:18:45.137619 containerd[1472]: time="2024-07-02T00:18:45.136254419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:18:45.297553 sshd[4688]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:45.305485 systemd[1]: sshd@7-64.23.132.250:22-147.75.109.163:55924.service: Deactivated successfully. Jul 2 00:18:45.311116 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:18:45.315010 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:18:45.320387 systemd-logind[1445]: Removed session 8. Jul 2 00:18:45.643143 systemd[1]: run-containerd-runc-k8s.io-cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770-runc.mkXw7V.mount: Deactivated successfully. 
Jul 2 00:18:45.850272 systemd-networkd[1371]: vxlan.calico: Link UP Jul 2 00:18:45.850285 systemd-networkd[1371]: vxlan.calico: Gained carrier Jul 2 00:18:46.319878 systemd-networkd[1371]: cali02e80a3009e: Gained IPv6LL Jul 2 00:18:47.267436 containerd[1472]: time="2024-07-02T00:18:47.266855675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:47.271389 containerd[1472]: time="2024-07-02T00:18:47.270158040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:18:47.273250 containerd[1472]: time="2024-07-02T00:18:47.272834985Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:47.303320 containerd[1472]: time="2024-07-02T00:18:47.303201371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:47.371597 containerd[1472]: time="2024-07-02T00:18:47.371145963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.234824312s" Jul 2 00:18:47.371597 containerd[1472]: time="2024-07-02T00:18:47.371289018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:18:47.380462 containerd[1472]: time="2024-07-02T00:18:47.380384909Z" level=info msg="CreateContainer within sandbox 
\"cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:18:47.456970 containerd[1472]: time="2024-07-02T00:18:47.456791346Z" level=info msg="CreateContainer within sandbox \"cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bc572fd5597f71de68157b5f20b5531e2a2da4ab2fa5e97b01eabc9698764df2\"" Jul 2 00:18:47.458916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291679014.mount: Deactivated successfully. Jul 2 00:18:47.463838 containerd[1472]: time="2024-07-02T00:18:47.462899832Z" level=info msg="StartContainer for \"bc572fd5597f71de68157b5f20b5531e2a2da4ab2fa5e97b01eabc9698764df2\"" Jul 2 00:18:47.540039 systemd[1]: Started cri-containerd-bc572fd5597f71de68157b5f20b5531e2a2da4ab2fa5e97b01eabc9698764df2.scope - libcontainer container bc572fd5597f71de68157b5f20b5531e2a2da4ab2fa5e97b01eabc9698764df2. Jul 2 00:18:47.621145 containerd[1472]: time="2024-07-02T00:18:47.620855216Z" level=info msg="StartContainer for \"bc572fd5597f71de68157b5f20b5531e2a2da4ab2fa5e97b01eabc9698764df2\" returns successfully" Jul 2 00:18:47.624372 containerd[1472]: time="2024-07-02T00:18:47.623796907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:18:47.856065 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jul 2 00:18:49.532318 containerd[1472]: time="2024-07-02T00:18:49.530845177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:49.534795 containerd[1472]: time="2024-07-02T00:18:49.534636651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:18:49.536144 containerd[1472]: time="2024-07-02T00:18:49.536034862Z" level=info msg="ImageCreate event 
name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:49.550897 containerd[1472]: time="2024-07-02T00:18:49.550654882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:49.554000 containerd[1472]: time="2024-07-02T00:18:49.553454633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.929597669s" Jul 2 00:18:49.554612 containerd[1472]: time="2024-07-02T00:18:49.554443599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:18:49.559587 containerd[1472]: time="2024-07-02T00:18:49.559236802Z" level=info msg="CreateContainer within sandbox \"cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:18:49.640037 containerd[1472]: time="2024-07-02T00:18:49.639747137Z" level=info msg="CreateContainer within sandbox \"cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7411bad2f7dcb5ea0c530b9af2ee1d989bb35216d561c049d4d1e11b3a83d90d\"" Jul 2 00:18:49.643629 containerd[1472]: time="2024-07-02T00:18:49.642455114Z" level=info msg="StartContainer for \"7411bad2f7dcb5ea0c530b9af2ee1d989bb35216d561c049d4d1e11b3a83d90d\"" 
Jul 2 00:18:49.770887 systemd[1]: Started cri-containerd-7411bad2f7dcb5ea0c530b9af2ee1d989bb35216d561c049d4d1e11b3a83d90d.scope - libcontainer container 7411bad2f7dcb5ea0c530b9af2ee1d989bb35216d561c049d4d1e11b3a83d90d. Jul 2 00:18:49.844358 containerd[1472]: time="2024-07-02T00:18:49.844197312Z" level=info msg="StartContainer for \"7411bad2f7dcb5ea0c530b9af2ee1d989bb35216d561c049d4d1e11b3a83d90d\" returns successfully" Jul 2 00:18:50.324457 systemd[1]: Started sshd@8-64.23.132.250:22-147.75.109.163:55932.service - OpenSSH per-connection server daemon (147.75.109.163:55932). Jul 2 00:18:50.567287 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 55932 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:18:50.579105 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:50.601644 systemd-logind[1445]: New session 9 of user core. Jul 2 00:18:50.609340 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:18:50.644896 kubelet[2543]: I0702 00:18:50.644745 2543 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:18:50.649771 kubelet[2543]: I0702 00:18:50.649697 2543 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:18:50.761942 kubelet[2543]: I0702 00:18:50.761537 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dpgqd" podStartSLOduration=33.340410741 podStartE2EDuration="37.761483922s" podCreationTimestamp="2024-07-02 00:18:13 +0000 UTC" firstStartedPulling="2024-07-02 00:18:45.135636148 +0000 UTC m=+53.197675405" lastFinishedPulling="2024-07-02 00:18:49.556709306 +0000 UTC m=+57.618748586" observedRunningTime="2024-07-02 00:18:50.756699785 +0000 UTC m=+58.818739081" 
watchObservedRunningTime="2024-07-02 00:18:50.761483922 +0000 UTC m=+58.823523204" Jul 2 00:18:51.255835 sshd[4967]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:51.263455 systemd[1]: sshd@8-64.23.132.250:22-147.75.109.163:55932.service: Deactivated successfully. Jul 2 00:18:51.269378 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:18:51.270804 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:18:51.272323 systemd-logind[1445]: Removed session 9. Jul 2 00:18:52.188631 containerd[1472]: time="2024-07-02T00:18:52.188348884Z" level=info msg="StopPodSandbox for \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\"" Jul 2 00:18:52.188631 containerd[1472]: time="2024-07-02T00:18:52.188501753Z" level=info msg="TearDown network for sandbox \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\" successfully" Jul 2 00:18:52.188631 containerd[1472]: time="2024-07-02T00:18:52.188515894Z" level=info msg="StopPodSandbox for \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\" returns successfully" Jul 2 00:18:52.197601 containerd[1472]: time="2024-07-02T00:18:52.196608650Z" level=info msg="RemovePodSandbox for \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\"" Jul 2 00:18:52.203188 containerd[1472]: time="2024-07-02T00:18:52.203039964Z" level=info msg="Forcibly stopping sandbox \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\"" Jul 2 00:18:52.221824 containerd[1472]: time="2024-07-02T00:18:52.203191417Z" level=info msg="TearDown network for sandbox \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\" successfully" Jul 2 00:18:52.302965 containerd[1472]: time="2024-07-02T00:18:52.302439889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jul 2 00:18:52.302965 containerd[1472]: time="2024-07-02T00:18:52.302668986Z" level=info msg="RemovePodSandbox \"2fddbbf7ef31edcc21d2d167bf089806a4220864ed5560eb1bc3a0ea47459a43\" returns successfully" Jul 2 00:18:52.308283 containerd[1472]: time="2024-07-02T00:18:52.306624776Z" level=info msg="StopPodSandbox for \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\"" Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.396 [WARNING][5003] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3d95f80-f22a-4d64-99eb-0d72b7beb76e", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770", Pod:"csi-node-driver-dpgqd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali02e80a3009e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.396 [INFO][5003] k8s.go 608: Cleaning up netns ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.396 [INFO][5003] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" iface="eth0" netns="" Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.396 [INFO][5003] k8s.go 615: Releasing IP address(es) ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.397 [INFO][5003] utils.go 188: Calico CNI releasing IP address ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.454 [INFO][5010] ipam_plugin.go 411: Releasing address using handleID ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.454 [INFO][5010] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.454 [INFO][5010] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.472 [WARNING][5010] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.472 [INFO][5010] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.477 [INFO][5010] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:52.484677 containerd[1472]: 2024-07-02 00:18:52.481 [INFO][5003] k8s.go 621: Teardown processing complete. ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:52.485883 containerd[1472]: time="2024-07-02T00:18:52.485449278Z" level=info msg="TearDown network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\" successfully" Jul 2 00:18:52.485883 containerd[1472]: time="2024-07-02T00:18:52.485505903Z" level=info msg="StopPodSandbox for \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\" returns successfully" Jul 2 00:18:52.486383 containerd[1472]: time="2024-07-02T00:18:52.486216785Z" level=info msg="RemovePodSandbox for \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\"" Jul 2 00:18:52.486383 containerd[1472]: time="2024-07-02T00:18:52.486260442Z" level=info msg="Forcibly stopping sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\"" Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.542 [WARNING][5029] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3d95f80-f22a-4d64-99eb-0d72b7beb76e", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"cf653f145c373f4c2b27138fd56c6e2f39fca2a96723a0c4973e487c6ec05770", Pod:"csi-node-driver-dpgqd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali02e80a3009e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.543 [INFO][5029] k8s.go 608: Cleaning up netns ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.543 [INFO][5029] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" iface="eth0" netns="" Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.543 [INFO][5029] k8s.go 615: Releasing IP address(es) ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.543 [INFO][5029] utils.go 188: Calico CNI releasing IP address ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.572 [INFO][5035] ipam_plugin.go 411: Releasing address using handleID ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.573 [INFO][5035] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.573 [INFO][5035] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.585 [WARNING][5035] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.585 [INFO][5035] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" HandleID="k8s-pod-network.7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-csi--node--driver--dpgqd-eth0" Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.600 [INFO][5035] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:52.605329 containerd[1472]: 2024-07-02 00:18:52.602 [INFO][5029] k8s.go 621: Teardown processing complete. ContainerID="7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0" Jul 2 00:18:52.607931 containerd[1472]: time="2024-07-02T00:18:52.606206298Z" level=info msg="TearDown network for sandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\" successfully" Jul 2 00:18:52.611961 containerd[1472]: time="2024-07-02T00:18:52.611867753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:18:52.612402 containerd[1472]: time="2024-07-02T00:18:52.612253559Z" level=info msg="RemovePodSandbox \"7bc34c14754db37c601400724208d0149eed870786c0e4c22e4170c08cf1afc0\" returns successfully" Jul 2 00:18:52.613009 containerd[1472]: time="2024-07-02T00:18:52.612964268Z" level=info msg="StopPodSandbox for \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\"" Jul 2 00:18:52.613152 containerd[1472]: time="2024-07-02T00:18:52.613118105Z" level=info msg="TearDown network for sandbox \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\" successfully" Jul 2 00:18:52.613152 containerd[1472]: time="2024-07-02T00:18:52.613142635Z" level=info msg="StopPodSandbox for \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\" returns successfully" Jul 2 00:18:52.613774 containerd[1472]: time="2024-07-02T00:18:52.613729231Z" level=info msg="RemovePodSandbox for \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\"" Jul 2 00:18:52.613841 containerd[1472]: time="2024-07-02T00:18:52.613777197Z" level=info msg="Forcibly stopping sandbox \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\"" Jul 2 00:18:52.613902 containerd[1472]: time="2024-07-02T00:18:52.613854072Z" level=info msg="TearDown network for sandbox \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\" successfully" Jul 2 00:18:52.631407 containerd[1472]: time="2024-07-02T00:18:52.631264147Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:18:52.631407 containerd[1472]: time="2024-07-02T00:18:52.631372131Z" level=info msg="RemovePodSandbox \"ba87404fbf5eb0bf45b945bc63ac08d665feeca9c221cab7f5408ef4cea5b998\" returns successfully" Jul 2 00:18:52.633097 containerd[1472]: time="2024-07-02T00:18:52.632642383Z" level=info msg="StopPodSandbox for \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\"" Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.717 [WARNING][5053] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f18b362e-cb9c-4d57-98c4-b6cecd0957a3", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da", Pod:"coredns-7db6d8ff4d-vplpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33386cc29b4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.718 [INFO][5053] k8s.go 608: Cleaning up netns ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.718 [INFO][5053] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" iface="eth0" netns="" Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.718 [INFO][5053] k8s.go 615: Releasing IP address(es) ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.718 [INFO][5053] utils.go 188: Calico CNI releasing IP address ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.760 [INFO][5059] ipam_plugin.go 411: Releasing address using handleID ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.760 [INFO][5059] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.761 [INFO][5059] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.771 [WARNING][5059] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.771 [INFO][5059] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.775 [INFO][5059] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:52.778416 containerd[1472]: 2024-07-02 00:18:52.776 [INFO][5053] k8s.go 621: Teardown processing complete. 
ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:52.778416 containerd[1472]: time="2024-07-02T00:18:52.778381359Z" level=info msg="TearDown network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\" successfully" Jul 2 00:18:52.778416 containerd[1472]: time="2024-07-02T00:18:52.778407952Z" level=info msg="StopPodSandbox for \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\" returns successfully" Jul 2 00:18:52.781248 containerd[1472]: time="2024-07-02T00:18:52.781197979Z" level=info msg="RemovePodSandbox for \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\"" Jul 2 00:18:52.781358 containerd[1472]: time="2024-07-02T00:18:52.781261295Z" level=info msg="Forcibly stopping sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\"" Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.829 [WARNING][5077] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f18b362e-cb9c-4d57-98c4-b6cecd0957a3", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"cb2b687de32c91a9a972ff69b6161db696277d692e0fa7a7cc0defe915eb21da", Pod:"coredns-7db6d8ff4d-vplpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33386cc29b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.830 [INFO][5077] k8s.go 608: 
Cleaning up netns ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.830 [INFO][5077] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" iface="eth0" netns="" Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.830 [INFO][5077] k8s.go 615: Releasing IP address(es) ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.830 [INFO][5077] utils.go 188: Calico CNI releasing IP address ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.857 [INFO][5083] ipam_plugin.go 411: Releasing address using handleID ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.857 [INFO][5083] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.857 [INFO][5083] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.866 [WARNING][5083] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.866 [INFO][5083] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" HandleID="k8s-pod-network.12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--vplpg-eth0" Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.872 [INFO][5083] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:52.878629 containerd[1472]: 2024-07-02 00:18:52.875 [INFO][5077] k8s.go 621: Teardown processing complete. ContainerID="12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b" Jul 2 00:18:52.878629 containerd[1472]: time="2024-07-02T00:18:52.877773538Z" level=info msg="TearDown network for sandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\" successfully" Jul 2 00:18:52.882968 containerd[1472]: time="2024-07-02T00:18:52.882778842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:18:52.883146 containerd[1472]: time="2024-07-02T00:18:52.882985114Z" level=info msg="RemovePodSandbox \"12b99c5f3a8bdcfd505f08fac04fc69a4fa09a4298b9923461c319f4a03e314b\" returns successfully" Jul 2 00:18:52.884363 containerd[1472]: time="2024-07-02T00:18:52.884006356Z" level=info msg="StopPodSandbox for \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\"" Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.935 [WARNING][5101] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0", GenerateName:"calico-kube-controllers-df7b6c459-", Namespace:"calico-system", SelfLink:"", UID:"78d1aa2f-5769-4d6f-8574-3eb177d83dcb", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df7b6c459", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0", Pod:"calico-kube-controllers-df7b6c459-dxjdh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd1516030ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.936 [INFO][5101] k8s.go 608: Cleaning up netns ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.936 [INFO][5101] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" iface="eth0" netns="" Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.936 [INFO][5101] k8s.go 615: Releasing IP address(es) ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.936 [INFO][5101] utils.go 188: Calico CNI releasing IP address ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.966 [INFO][5107] ipam_plugin.go 411: Releasing address using handleID ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.966 [INFO][5107] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.966 [INFO][5107] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.975 [WARNING][5107] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.976 [INFO][5107] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.979 [INFO][5107] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:52.983329 containerd[1472]: 2024-07-02 00:18:52.981 [INFO][5101] k8s.go 621: Teardown processing complete. ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:52.983329 containerd[1472]: time="2024-07-02T00:18:52.983135517Z" level=info msg="TearDown network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\" successfully" Jul 2 00:18:52.983329 containerd[1472]: time="2024-07-02T00:18:52.983175939Z" level=info msg="StopPodSandbox for \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\" returns successfully" Jul 2 00:18:52.985649 containerd[1472]: time="2024-07-02T00:18:52.985171531Z" level=info msg="RemovePodSandbox for \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\"" Jul 2 00:18:52.985649 containerd[1472]: time="2024-07-02T00:18:52.985248393Z" level=info msg="Forcibly stopping sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\"" Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.051 [WARNING][5126] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0", GenerateName:"calico-kube-controllers-df7b6c459-", Namespace:"calico-system", SelfLink:"", UID:"78d1aa2f-5769-4d6f-8574-3eb177d83dcb", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df7b6c459", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"91e26ae58c8bf5ca48067d4fb68c5582a8c218b20d01ec532e8b89d770e619b0", Pod:"calico-kube-controllers-df7b6c459-dxjdh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd1516030ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.052 [INFO][5126] k8s.go 608: Cleaning up netns ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.052 [INFO][5126] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" iface="eth0" netns="" Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.052 [INFO][5126] k8s.go 615: Releasing IP address(es) ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.052 [INFO][5126] utils.go 188: Calico CNI releasing IP address ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.090 [INFO][5133] ipam_plugin.go 411: Releasing address using handleID ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.091 [INFO][5133] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.091 [INFO][5133] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.101 [WARNING][5133] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.101 [INFO][5133] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" HandleID="k8s-pod-network.c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--kube--controllers--df7b6c459--dxjdh-eth0" Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.104 [INFO][5133] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:53.108314 containerd[1472]: 2024-07-02 00:18:53.106 [INFO][5126] k8s.go 621: Teardown processing complete. ContainerID="c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284" Jul 2 00:18:53.111038 containerd[1472]: time="2024-07-02T00:18:53.108628728Z" level=info msg="TearDown network for sandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\" successfully" Jul 2 00:18:53.113084 containerd[1472]: time="2024-07-02T00:18:53.113009349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:18:53.113228 containerd[1472]: time="2024-07-02T00:18:53.113101969Z" level=info msg="RemovePodSandbox \"c4f38e218dc634aa72d5da94fba0e397028ec66505646ea6fe1cced0fa8d0284\" returns successfully" Jul 2 00:18:53.114338 containerd[1472]: time="2024-07-02T00:18:53.114015221Z" level=info msg="StopPodSandbox for \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\"" Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.165 [WARNING][5152] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"78f44af7-be8f-4f60-9f8c-68664bae1d7c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d", Pod:"coredns-7db6d8ff4d-mq5j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c0b744628", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.166 [INFO][5152] k8s.go 608: Cleaning up netns ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.166 [INFO][5152] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" iface="eth0" netns="" Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.166 [INFO][5152] k8s.go 615: Releasing IP address(es) ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.166 [INFO][5152] utils.go 188: Calico CNI releasing IP address ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.196 [INFO][5158] ipam_plugin.go 411: Releasing address using handleID ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.196 [INFO][5158] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.196 [INFO][5158] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.207 [WARNING][5158] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.207 [INFO][5158] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.210 [INFO][5158] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:53.215595 containerd[1472]: 2024-07-02 00:18:53.213 [INFO][5152] k8s.go 621: Teardown processing complete. 
ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:53.217381 containerd[1472]: time="2024-07-02T00:18:53.215646250Z" level=info msg="TearDown network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\" successfully" Jul 2 00:18:53.217381 containerd[1472]: time="2024-07-02T00:18:53.215672687Z" level=info msg="StopPodSandbox for \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\" returns successfully" Jul 2 00:18:53.217381 containerd[1472]: time="2024-07-02T00:18:53.216284995Z" level=info msg="RemovePodSandbox for \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\"" Jul 2 00:18:53.217381 containerd[1472]: time="2024-07-02T00:18:53.216318450Z" level=info msg="Forcibly stopping sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\"" Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.265 [WARNING][5177] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"78f44af7-be8f-4f60-9f8c-68664bae1d7c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"1ee49ff0b48606ca5f750ff81157abb0b628341c93b5b0283a3198809fefcf2d", Pod:"coredns-7db6d8ff4d-mq5j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c0b744628", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.266 [INFO][5177] k8s.go 608: 
Cleaning up netns ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.266 [INFO][5177] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" iface="eth0" netns="" Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.266 [INFO][5177] k8s.go 615: Releasing IP address(es) ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.266 [INFO][5177] utils.go 188: Calico CNI releasing IP address ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.296 [INFO][5183] ipam_plugin.go 411: Releasing address using handleID ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.296 [INFO][5183] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.296 [INFO][5183] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.304 [WARNING][5183] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.304 [INFO][5183] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" HandleID="k8s-pod-network.e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Workload="ci--3975.1.1--c--5be545c9fd-k8s-coredns--7db6d8ff4d--mq5j8-eth0" Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.307 [INFO][5183] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:53.311639 containerd[1472]: 2024-07-02 00:18:53.309 [INFO][5177] k8s.go 621: Teardown processing complete. ContainerID="e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572" Jul 2 00:18:53.312936 containerd[1472]: time="2024-07-02T00:18:53.311723120Z" level=info msg="TearDown network for sandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\" successfully" Jul 2 00:18:53.319665 containerd[1472]: time="2024-07-02T00:18:53.319520428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:18:53.319932 containerd[1472]: time="2024-07-02T00:18:53.319704099Z" level=info msg="RemovePodSandbox \"e1015a203fc652227776daa37af45b6ddf61b3085e9f5250b16ee6a95f44d572\" returns successfully" Jul 2 00:18:56.275883 systemd[1]: Started sshd@9-64.23.132.250:22-147.75.109.163:57102.service - OpenSSH per-connection server daemon (147.75.109.163:57102). 
Jul 2 00:18:56.365350 sshd[5203]: Accepted publickey for core from 147.75.109.163 port 57102 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:18:56.367417 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:56.374485 systemd-logind[1445]: New session 10 of user core. Jul 2 00:18:56.378787 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:18:56.591350 sshd[5203]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:56.605490 systemd[1]: sshd@9-64.23.132.250:22-147.75.109.163:57102.service: Deactivated successfully. Jul 2 00:18:56.609587 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:18:56.612354 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:18:56.622063 systemd[1]: Started sshd@10-64.23.132.250:22-147.75.109.163:57106.service - OpenSSH per-connection server daemon (147.75.109.163:57106). Jul 2 00:18:56.623719 systemd-logind[1445]: Removed session 10. Jul 2 00:18:56.692659 sshd[5217]: Accepted publickey for core from 147.75.109.163 port 57106 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:18:56.695987 sshd[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:56.705103 systemd-logind[1445]: New session 11 of user core. Jul 2 00:18:56.716848 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:18:56.967237 sshd[5217]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:56.985201 systemd[1]: sshd@10-64.23.132.250:22-147.75.109.163:57106.service: Deactivated successfully. Jul 2 00:18:56.990373 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:18:56.997938 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:18:57.007620 systemd[1]: Started sshd@11-64.23.132.250:22-147.75.109.163:57116.service - OpenSSH per-connection server daemon (147.75.109.163:57116). 
Jul 2 00:18:57.012513 systemd-logind[1445]: Removed session 11. Jul 2 00:18:57.103854 sshd[5228]: Accepted publickey for core from 147.75.109.163 port 57116 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:18:57.106824 sshd[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:57.114295 systemd-logind[1445]: New session 12 of user core. Jul 2 00:18:57.122865 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:18:57.333933 sshd[5228]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:57.341973 systemd[1]: sshd@11-64.23.132.250:22-147.75.109.163:57116.service: Deactivated successfully. Jul 2 00:18:57.348355 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:18:57.351572 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:18:57.354612 systemd-logind[1445]: Removed session 12. Jul 2 00:18:57.415911 systemd[1]: run-containerd-runc-k8s.io-7d0fb83e8538b08c6ff6fd8653f3825500e4496ec8633357cee28c65bc118f82-runc.GcMOUY.mount: Deactivated successfully. Jul 2 00:18:58.234111 kubelet[2543]: E0702 00:18:58.234037 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:19:02.361498 systemd[1]: Started sshd@12-64.23.132.250:22-147.75.109.163:57130.service - OpenSSH per-connection server daemon (147.75.109.163:57130). Jul 2 00:19:02.414621 sshd[5279]: Accepted publickey for core from 147.75.109.163 port 57130 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:02.416759 sshd[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:02.426585 systemd-logind[1445]: New session 13 of user core. Jul 2 00:19:02.432248 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 2 00:19:02.636882 sshd[5279]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:02.647845 systemd[1]: sshd@12-64.23.132.250:22-147.75.109.163:57130.service: Deactivated successfully. Jul 2 00:19:02.648597 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:19:02.653800 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:19:02.659206 systemd-logind[1445]: Removed session 13. Jul 2 00:19:06.137929 kubelet[2543]: E0702 00:19:06.137644 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:19:07.661050 systemd[1]: Started sshd@13-64.23.132.250:22-147.75.109.163:42740.service - OpenSSH per-connection server daemon (147.75.109.163:42740). Jul 2 00:19:07.748293 sshd[5311]: Accepted publickey for core from 147.75.109.163 port 42740 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:07.750423 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:07.758572 systemd-logind[1445]: New session 14 of user core. Jul 2 00:19:07.768877 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:19:07.941952 sshd[5311]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:07.948985 systemd[1]: sshd@13-64.23.132.250:22-147.75.109.163:42740.service: Deactivated successfully. Jul 2 00:19:07.952399 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:19:07.954118 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:19:07.955322 systemd-logind[1445]: Removed session 14. Jul 2 00:19:12.977119 systemd[1]: Started sshd@14-64.23.132.250:22-147.75.109.163:35892.service - OpenSSH per-connection server daemon (147.75.109.163:35892). 
Jul 2 00:19:13.071203 sshd[5325]: Accepted publickey for core from 147.75.109.163 port 35892 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:13.075041 sshd[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:13.084710 systemd-logind[1445]: New session 15 of user core. Jul 2 00:19:13.093557 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:19:13.382475 sshd[5325]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:13.395775 systemd[1]: sshd@14-64.23.132.250:22-147.75.109.163:35892.service: Deactivated successfully. Jul 2 00:19:13.404342 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:19:13.407081 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:19:13.409053 systemd-logind[1445]: Removed session 15. Jul 2 00:19:14.138964 kubelet[2543]: E0702 00:19:14.138166 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:19:15.528916 systemd[1]: run-containerd-runc-k8s.io-7d0fb83e8538b08c6ff6fd8653f3825500e4496ec8633357cee28c65bc118f82-runc.JTcALf.mount: Deactivated successfully. Jul 2 00:19:18.410032 systemd[1]: Started sshd@15-64.23.132.250:22-147.75.109.163:35902.service - OpenSSH per-connection server daemon (147.75.109.163:35902). Jul 2 00:19:18.472507 sshd[5361]: Accepted publickey for core from 147.75.109.163 port 35902 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:18.474957 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:18.482832 systemd-logind[1445]: New session 16 of user core. Jul 2 00:19:18.490064 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 2 00:19:18.742080 sshd[5361]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:18.756502 systemd[1]: sshd@15-64.23.132.250:22-147.75.109.163:35902.service: Deactivated successfully. Jul 2 00:19:18.760798 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:19:18.765254 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:19:18.767184 systemd-logind[1445]: Removed session 16. Jul 2 00:19:21.139590 kubelet[2543]: E0702 00:19:21.137771 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:19:23.763036 systemd[1]: Started sshd@16-64.23.132.250:22-147.75.109.163:54562.service - OpenSSH per-connection server daemon (147.75.109.163:54562). Jul 2 00:19:23.849140 sshd[5374]: Accepted publickey for core from 147.75.109.163 port 54562 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:23.851489 sshd[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:23.858878 systemd-logind[1445]: New session 17 of user core. Jul 2 00:19:23.863912 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:19:24.097018 sshd[5374]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:24.110269 systemd[1]: sshd@16-64.23.132.250:22-147.75.109.163:54562.service: Deactivated successfully. Jul 2 00:19:24.114272 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:19:24.118693 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:19:24.127095 systemd[1]: Started sshd@17-64.23.132.250:22-147.75.109.163:54564.service - OpenSSH per-connection server daemon (147.75.109.163:54564). Jul 2 00:19:24.128347 systemd-logind[1445]: Removed session 17. 
Jul 2 00:19:24.178993 sshd[5386]: Accepted publickey for core from 147.75.109.163 port 54564 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:24.181150 sshd[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:24.189796 systemd-logind[1445]: New session 18 of user core. Jul 2 00:19:24.195936 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:19:24.601717 sshd[5386]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:24.618639 systemd[1]: sshd@17-64.23.132.250:22-147.75.109.163:54564.service: Deactivated successfully. Jul 2 00:19:24.622204 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:19:24.626871 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:19:24.636020 systemd[1]: Started sshd@18-64.23.132.250:22-147.75.109.163:54574.service - OpenSSH per-connection server daemon (147.75.109.163:54574). Jul 2 00:19:24.640293 systemd-logind[1445]: Removed session 18. Jul 2 00:19:24.714503 sshd[5398]: Accepted publickey for core from 147.75.109.163 port 54574 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:24.716768 sshd[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:24.724665 systemd-logind[1445]: New session 19 of user core. Jul 2 00:19:24.735957 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:19:25.146308 kubelet[2543]: E0702 00:19:25.146036 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:19:27.031042 sshd[5398]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:27.051359 systemd[1]: sshd@18-64.23.132.250:22-147.75.109.163:54574.service: Deactivated successfully. Jul 2 00:19:27.057349 systemd[1]: session-19.scope: Deactivated successfully. 
Jul 2 00:19:27.060628 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:19:27.071505 systemd[1]: Started sshd@19-64.23.132.250:22-147.75.109.163:54584.service - OpenSSH per-connection server daemon (147.75.109.163:54584). Jul 2 00:19:27.075909 systemd-logind[1445]: Removed session 19. Jul 2 00:19:27.166373 sshd[5430]: Accepted publickey for core from 147.75.109.163 port 54584 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:27.171136 sshd[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:27.182613 systemd-logind[1445]: New session 20 of user core. Jul 2 00:19:27.188851 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:19:27.965570 kubelet[2543]: I0702 00:19:27.955674 2543 topology_manager.go:215] "Topology Admit Handler" podUID="6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b" podNamespace="calico-apiserver" podName="calico-apiserver-7776b8f564-bbkz2" Jul 2 00:19:28.010172 systemd[1]: Created slice kubepods-besteffort-pod6db5bb32_c2dc_4d17_8c40_c94fa3b8f15b.slice - libcontainer container kubepods-besteffort-pod6db5bb32_c2dc_4d17_8c40_c94fa3b8f15b.slice. 
Jul 2 00:19:28.101728 kubelet[2543]: I0702 00:19:28.101484 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b-calico-apiserver-certs\") pod \"calico-apiserver-7776b8f564-bbkz2\" (UID: \"6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b\") " pod="calico-apiserver/calico-apiserver-7776b8f564-bbkz2" Jul 2 00:19:28.102000 kubelet[2543]: I0702 00:19:28.101744 2543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74xxj\" (UniqueName: \"kubernetes.io/projected/6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b-kube-api-access-74xxj\") pod \"calico-apiserver-7776b8f564-bbkz2\" (UID: \"6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b\") " pod="calico-apiserver/calico-apiserver-7776b8f564-bbkz2" Jul 2 00:19:28.207461 kubelet[2543]: E0702 00:19:28.207245 2543 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:19:28.221802 sshd[5430]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:28.239867 systemd[1]: sshd@19-64.23.132.250:22-147.75.109.163:54584.service: Deactivated successfully. Jul 2 00:19:28.257445 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:19:28.264361 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:19:28.275933 systemd[1]: Started sshd@20-64.23.132.250:22-147.75.109.163:54592.service - OpenSSH per-connection server daemon (147.75.109.163:54592). Jul 2 00:19:28.282178 kubelet[2543]: E0702 00:19:28.279329 2543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b-calico-apiserver-certs podName:6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b nodeName:}" failed. No retries permitted until 2024-07-02 00:19:28.715792967 +0000 UTC m=+96.777832246 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b-calico-apiserver-certs") pod "calico-apiserver-7776b8f564-bbkz2" (UID: "6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b") : secret "calico-apiserver-certs" not found Jul 2 00:19:28.282478 systemd-logind[1445]: Removed session 20. Jul 2 00:19:28.375665 sshd[5488]: Accepted publickey for core from 147.75.109.163 port 54592 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:28.376631 sshd[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:28.390469 systemd-logind[1445]: New session 21 of user core. Jul 2 00:19:28.395834 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:19:28.688837 sshd[5488]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:28.694241 systemd[1]: sshd@20-64.23.132.250:22-147.75.109.163:54592.service: Deactivated successfully. Jul 2 00:19:28.697795 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:19:28.702325 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:19:28.703629 systemd-logind[1445]: Removed session 21. 
Jul 2 00:19:28.936035 containerd[1472]: time="2024-07-02T00:19:28.935966368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7776b8f564-bbkz2,Uid:6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:19:29.265025 systemd-networkd[1371]: cali6a5bfc98320: Link UP Jul 2 00:19:29.265362 systemd-networkd[1371]: cali6a5bfc98320: Gained carrier Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.104 [INFO][5507] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0 calico-apiserver-7776b8f564- calico-apiserver 6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b 1246 0 2024-07-02 00:19:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7776b8f564 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-c-5be545c9fd calico-apiserver-7776b8f564-bbkz2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a5bfc98320 [] []}} ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Namespace="calico-apiserver" Pod="calico-apiserver-7776b8f564-bbkz2" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.105 [INFO][5507] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Namespace="calico-apiserver" Pod="calico-apiserver-7776b8f564-bbkz2" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.178 [INFO][5515] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" HandleID="k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.192 [INFO][5515] ipam_plugin.go 264: Auto assigning IP ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" HandleID="k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fff60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-c-5be545c9fd", "pod":"calico-apiserver-7776b8f564-bbkz2", "timestamp":"2024-07-02 00:19:29.178617111 +0000 UTC"}, Hostname:"ci-3975.1.1-c-5be545c9fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.192 [INFO][5515] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.193 [INFO][5515] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.193 [INFO][5515] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-c-5be545c9fd' Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.196 [INFO][5515] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.207 [INFO][5515] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.217 [INFO][5515] ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.221 [INFO][5515] ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.228 [INFO][5515] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.228 [INFO][5515] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.236 [INFO][5515] ipam.go 1685: Creating new handle: k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0 Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.242 [INFO][5515] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" host="ci-3975.1.1-c-5be545c9fd" Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.252 [INFO][5515] ipam.go 1216: Successfully claimed IPs: [192.168.63.69/26] block=192.168.63.64/26 
handle="k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.252 [INFO][5515] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.69/26] handle="k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" host="ci-3975.1.1-c-5be545c9fd"
Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.252 [INFO][5515] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:19:29.291189 containerd[1472]: 2024-07-02 00:19:29.252 [INFO][5515] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.69/26] IPv6=[] ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" HandleID="k8s-pod-network.556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Workload="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0"
Jul 2 00:19:29.297406 containerd[1472]: 2024-07-02 00:19:29.257 [INFO][5507] k8s.go 386: Populated endpoint ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Namespace="calico-apiserver" Pod="calico-apiserver-7776b8f564-bbkz2" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0", GenerateName:"calico-apiserver-7776b8f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7776b8f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"", Pod:"calico-apiserver-7776b8f564-bbkz2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a5bfc98320", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:19:29.297406 containerd[1472]: 2024-07-02 00:19:29.258 [INFO][5507] k8s.go 387: Calico CNI using IPs: [192.168.63.69/32] ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Namespace="calico-apiserver" Pod="calico-apiserver-7776b8f564-bbkz2" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0"
Jul 2 00:19:29.297406 containerd[1472]: 2024-07-02 00:19:29.258 [INFO][5507] dataplane_linux.go 68: Setting the host side veth name to cali6a5bfc98320 ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Namespace="calico-apiserver" Pod="calico-apiserver-7776b8f564-bbkz2" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0"
Jul 2 00:19:29.297406 containerd[1472]: 2024-07-02 00:19:29.263 [INFO][5507] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Namespace="calico-apiserver" Pod="calico-apiserver-7776b8f564-bbkz2" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0"
Jul 2 00:19:29.297406 containerd[1472]: 2024-07-02 00:19:29.267 [INFO][5507] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Namespace="calico-apiserver" Pod="calico-apiserver-7776b8f564-bbkz2" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0", GenerateName:"calico-apiserver-7776b8f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7776b8f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-c-5be545c9fd", ContainerID:"556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0", Pod:"calico-apiserver-7776b8f564-bbkz2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a5bfc98320", MAC:"ba:a3:76:c4:9f:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:19:29.297406 containerd[1472]: 2024-07-02 00:19:29.283 [INFO][5507] k8s.go 500: Wrote updated endpoint to datastore ContainerID="556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0" Namespace="calico-apiserver" Pod="calico-apiserver-7776b8f564-bbkz2" WorkloadEndpoint="ci--3975.1.1--c--5be545c9fd-k8s-calico--apiserver--7776b8f564--bbkz2-eth0"
Jul 2 00:19:29.393501 containerd[1472]: time="2024-07-02T00:19:29.393366597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:19:29.393501 containerd[1472]: time="2024-07-02T00:19:29.393451262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:19:29.393908 containerd[1472]: time="2024-07-02T00:19:29.393853825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:19:29.393908 containerd[1472]: time="2024-07-02T00:19:29.393890192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:19:29.457784 systemd[1]: Started cri-containerd-556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0.scope - libcontainer container 556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0.
Jul 2 00:19:29.578365 containerd[1472]: time="2024-07-02T00:19:29.578320409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7776b8f564-bbkz2,Uid:6db5bb32-c2dc-4d17-8c40-c94fa3b8f15b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0\""
Jul 2 00:19:29.589378 containerd[1472]: time="2024-07-02T00:19:29.581251705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 00:19:30.799836 systemd-networkd[1371]: cali6a5bfc98320: Gained IPv6LL
Jul 2 00:19:32.733920 update_engine[1446]: I0702 00:19:32.733331 1446 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 2 00:19:32.733920 update_engine[1446]: I0702 00:19:32.733412 1446 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 2 00:19:32.737216 update_engine[1446]: I0702 00:19:32.736477 1446 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 2 00:19:32.738299 update_engine[1446]: I0702 00:19:32.738240 1446 omaha_request_params.cc:62] Current group set to beta
Jul 2 00:19:32.738854 update_engine[1446]: I0702 00:19:32.738440 1446 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 2 00:19:32.738854 update_engine[1446]: I0702 00:19:32.738456 1446 update_attempter.cc:643] Scheduling an action processor start.
Jul 2 00:19:32.738854 update_engine[1446]: I0702 00:19:32.738476 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 00:19:32.739562 update_engine[1446]: I0702 00:19:32.739490 1446 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 2 00:19:32.739690 update_engine[1446]: I0702 00:19:32.739670 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 2 00:19:32.739726 update_engine[1446]: I0702 00:19:32.739686 1446 omaha_request_action.cc:272] Request:
Jul 2 00:19:32.739726 update_engine[1446]:
Jul 2 00:19:32.739726 update_engine[1446]:
Jul 2 00:19:32.739726 update_engine[1446]:
Jul 2 00:19:32.739726 update_engine[1446]:
Jul 2 00:19:32.739726 update_engine[1446]:
Jul 2 00:19:32.739726 update_engine[1446]:
Jul 2 00:19:32.739726 update_engine[1446]:
Jul 2 00:19:32.739726 update_engine[1446]:
Jul 2 00:19:32.739726 update_engine[1446]: I0702 00:19:32.739696 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 00:19:32.781664 update_engine[1446]: I0702 00:19:32.781059 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 00:19:32.781664 update_engine[1446]: I0702 00:19:32.781399 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 00:19:32.794485 update_engine[1446]: E0702 00:19:32.792277 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 00:19:32.794485 update_engine[1446]: I0702 00:19:32.794343 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 2 00:19:32.803105 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 2 00:19:33.722030 systemd[1]: Started sshd@21-64.23.132.250:22-147.75.109.163:41520.service - OpenSSH per-connection server daemon (147.75.109.163:41520).
Jul 2 00:19:33.767429 containerd[1472]: time="2024-07-02T00:19:33.702907799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul 2 00:19:33.777435 containerd[1472]: time="2024-07-02T00:19:33.777030854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:33.838495 containerd[1472]: time="2024-07-02T00:19:33.837947133Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:33.899742 sshd[5588]: Accepted publickey for core from 147.75.109.163 port 41520 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:33.906417 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:33.923966 systemd-logind[1445]: New session 22 of user core.
Jul 2 00:19:33.926874 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:19:34.125699 containerd[1472]: time="2024-07-02T00:19:34.124190259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:34.132500 containerd[1472]: time="2024-07-02T00:19:34.132421977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.536124031s"
Jul 2 00:19:34.132767 containerd[1472]: time="2024-07-02T00:19:34.132740210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul 2 00:19:34.155092 containerd[1472]: time="2024-07-02T00:19:34.154998088Z" level=info msg="CreateContainer within sandbox \"556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 00:19:34.268607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931133717.mount: Deactivated successfully.
Jul 2 00:19:34.324934 containerd[1472]: time="2024-07-02T00:19:34.324747178Z" level=info msg="CreateContainer within sandbox \"556eb56461742ce8e28c1b65df00a6f3f6431d4b497986ead39bd22d173962e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"49728f58911f9d6ee9df178040426cc53f8fcb69fcc51bd79c3a39382d30ac49\""
Jul 2 00:19:34.329315 containerd[1472]: time="2024-07-02T00:19:34.326873773Z" level=info msg="StartContainer for \"49728f58911f9d6ee9df178040426cc53f8fcb69fcc51bd79c3a39382d30ac49\""
Jul 2 00:19:34.488874 systemd[1]: Started cri-containerd-49728f58911f9d6ee9df178040426cc53f8fcb69fcc51bd79c3a39382d30ac49.scope - libcontainer container 49728f58911f9d6ee9df178040426cc53f8fcb69fcc51bd79c3a39382d30ac49.
Jul 2 00:19:34.598732 containerd[1472]: time="2024-07-02T00:19:34.597245144Z" level=info msg="StartContainer for \"49728f58911f9d6ee9df178040426cc53f8fcb69fcc51bd79c3a39382d30ac49\" returns successfully"
Jul 2 00:19:34.917867 sshd[5588]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:34.924066 systemd[1]: sshd@21-64.23.132.250:22-147.75.109.163:41520.service: Deactivated successfully.
Jul 2 00:19:34.929211 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:19:34.932766 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:19:34.938015 systemd-logind[1445]: Removed session 22.
Jul 2 00:19:35.682581 kubelet[2543]: I0702 00:19:35.681315 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7776b8f564-bbkz2" podStartSLOduration=4.104704402 podStartE2EDuration="8.657741762s" podCreationTimestamp="2024-07-02 00:19:27 +0000 UTC" firstStartedPulling="2024-07-02 00:19:29.580837991 +0000 UTC m=+97.642877262" lastFinishedPulling="2024-07-02 00:19:34.133875351 +0000 UTC m=+102.195914622" observedRunningTime="2024-07-02 00:19:35.147251115 +0000 UTC m=+103.209290395" watchObservedRunningTime="2024-07-02 00:19:35.657741762 +0000 UTC m=+103.719781040"
Jul 2 00:19:39.934916 systemd[1]: Started sshd@22-64.23.132.250:22-147.75.109.163:41536.service - OpenSSH per-connection server daemon (147.75.109.163:41536).
Jul 2 00:19:40.037226 sshd[5655]: Accepted publickey for core from 147.75.109.163 port 41536 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:40.040511 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:40.046787 systemd-logind[1445]: New session 23 of user core.
Jul 2 00:19:40.056888 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:19:40.385827 sshd[5655]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:40.390373 systemd[1]: sshd@22-64.23.132.250:22-147.75.109.163:41536.service: Deactivated successfully.
Jul 2 00:19:40.395099 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:19:40.398637 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:19:40.400547 systemd-logind[1445]: Removed session 23.
Jul 2 00:19:42.681631 update_engine[1446]: I0702 00:19:42.681366 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 00:19:42.682197 update_engine[1446]: I0702 00:19:42.681670 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 00:19:42.682933 update_engine[1446]: I0702 00:19:42.682857 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 00:19:42.684057 update_engine[1446]: E0702 00:19:42.683902 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 00:19:42.684057 update_engine[1446]: I0702 00:19:42.683995 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 2 00:19:45.407037 systemd[1]: Started sshd@23-64.23.132.250:22-147.75.109.163:44594.service - OpenSSH per-connection server daemon (147.75.109.163:44594).
Jul 2 00:19:45.475350 sshd[5671]: Accepted publickey for core from 147.75.109.163 port 44594 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:45.478513 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:45.487275 systemd-logind[1445]: New session 24 of user core.
Jul 2 00:19:45.493133 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:19:45.689034 sshd[5671]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:45.697043 systemd[1]: sshd@23-64.23.132.250:22-147.75.109.163:44594.service: Deactivated successfully.
Jul 2 00:19:45.701392 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:19:45.703238 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:19:45.705296 systemd-logind[1445]: Removed session 24.
Jul 2 00:19:46.139253 kubelet[2543]: E0702 00:19:46.138268 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:19:48.137362 kubelet[2543]: E0702 00:19:48.137040 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:19:50.709931 systemd[1]: Started sshd@24-64.23.132.250:22-147.75.109.163:44608.service - OpenSSH per-connection server daemon (147.75.109.163:44608).
Jul 2 00:19:50.766419 sshd[5690]: Accepted publickey for core from 147.75.109.163 port 44608 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:50.768380 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:50.773930 systemd-logind[1445]: New session 25 of user core.
Jul 2 00:19:50.780815 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:19:50.943731 sshd[5690]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:50.951030 systemd[1]: sshd@24-64.23.132.250:22-147.75.109.163:44608.service: Deactivated successfully.
Jul 2 00:19:50.956703 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:19:50.958164 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:19:50.959769 systemd-logind[1445]: Removed session 25.
Jul 2 00:19:52.680971 update_engine[1446]: I0702 00:19:52.680917 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 00:19:52.681517 update_engine[1446]: I0702 00:19:52.681139 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 00:19:52.681517 update_engine[1446]: I0702 00:19:52.681391 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 00:19:52.682805 update_engine[1446]: E0702 00:19:52.682770 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 00:19:52.682957 update_engine[1446]: I0702 00:19:52.682834 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 2 00:19:55.970545 systemd[1]: Started sshd@25-64.23.132.250:22-147.75.109.163:43656.service - OpenSSH per-connection server daemon (147.75.109.163:43656).
Jul 2 00:19:56.021633 sshd[5705]: Accepted publickey for core from 147.75.109.163 port 43656 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:56.023943 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:56.031436 systemd-logind[1445]: New session 26 of user core.
Jul 2 00:19:56.039119 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:19:56.216981 sshd[5705]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:56.222399 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:19:56.224546 systemd[1]: sshd@25-64.23.132.250:22-147.75.109.163:43656.service: Deactivated successfully.
Jul 2 00:19:56.229616 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:19:56.232151 systemd-logind[1445]: Removed session 26.
Jul 2 00:20:01.245118 systemd[1]: Started sshd@26-64.23.132.250:22-147.75.109.163:43668.service - OpenSSH per-connection server daemon (147.75.109.163:43668).
Jul 2 00:20:01.307044 sshd[5765]: Accepted publickey for core from 147.75.109.163 port 43668 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:20:01.310931 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:20:01.323214 systemd-logind[1445]: New session 27 of user core.
Jul 2 00:20:01.330147 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:20:01.541451 sshd[5765]: pam_unix(sshd:session): session closed for user core
Jul 2 00:20:01.556342 systemd[1]: sshd@26-64.23.132.250:22-147.75.109.163:43668.service: Deactivated successfully.
Jul 2 00:20:01.560764 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:20:01.566131 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:20:01.570308 systemd-logind[1445]: Removed session 27.
Jul 2 00:20:02.680428 update_engine[1446]: I0702 00:20:02.679658 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 00:20:02.680428 update_engine[1446]: I0702 00:20:02.679960 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 00:20:02.680428 update_engine[1446]: I0702 00:20:02.680344 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 00:20:02.709558 update_engine[1446]: E0702 00:20:02.708521 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708614 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708621 1446 omaha_request_action.cc:617] Omaha request response:
Jul 2 00:20:02.709558 update_engine[1446]: E0702 00:20:02.708742 1446 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708778 1446 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708782 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708787 1446 update_attempter.cc:306] Processing Done.
Jul 2 00:20:02.709558 update_engine[1446]: E0702 00:20:02.708806 1446 update_attempter.cc:619] Update failed.
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708811 1446 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708817 1446 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708822 1446 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708913 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708941 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 2 00:20:02.709558 update_engine[1446]: I0702 00:20:02.708946 1446 omaha_request_action.cc:272] Request:
Jul 2 00:20:02.709558 update_engine[1446]:
Jul 2 00:20:02.709558 update_engine[1446]:
Jul 2 00:20:02.709558 update_engine[1446]:
Jul 2 00:20:02.710347 update_engine[1446]:
Jul 2 00:20:02.710347 update_engine[1446]:
Jul 2 00:20:02.710347 update_engine[1446]:
Jul 2 00:20:02.710347 update_engine[1446]: I0702 00:20:02.708952 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 00:20:02.710347 update_engine[1446]: I0702 00:20:02.709116 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 00:20:02.710347 update_engine[1446]: I0702 00:20:02.709482 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 00:20:02.711362 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 2 00:20:02.718675 update_engine[1446]: E0702 00:20:02.718631 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 00:20:02.718895 update_engine[1446]: I0702 00:20:02.718883 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 00:20:02.718975 update_engine[1446]: I0702 00:20:02.718966 1446 omaha_request_action.cc:617] Omaha request response:
Jul 2 00:20:02.719027 update_engine[1446]: I0702 00:20:02.719018 1446 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 00:20:02.719074 update_engine[1446]: I0702 00:20:02.719066 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 00:20:02.721781 update_engine[1446]: I0702 00:20:02.721576 1446 update_attempter.cc:306] Processing Done.
Jul 2 00:20:02.721781 update_engine[1446]: I0702 00:20:02.721626 1446 update_attempter.cc:310] Error event sent.
Jul 2 00:20:02.721781 update_engine[1446]: I0702 00:20:02.721655 1446 update_check_scheduler.cc:74] Next update check in 42m44s
Jul 2 00:20:02.723148 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0