Aug 6 07:52:54.952395 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:27 -00 2024
Aug 6 07:52:54.952429 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 6 07:52:54.952445 kernel: BIOS-provided physical RAM map:
Aug 6 07:52:54.952452 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 6 07:52:54.952458 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 6 07:52:54.952465 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 6 07:52:54.952473 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Aug 6 07:52:54.952480 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Aug 6 07:52:54.952486 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 6 07:52:54.952496 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 6 07:52:54.952503 kernel: NX (Execute Disable) protection: active
Aug 6 07:52:54.952509 kernel: APIC: Static calls initialized
Aug 6 07:52:54.952516 kernel: SMBIOS 2.8 present.
Aug 6 07:52:54.952523 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Aug 6 07:52:54.952532 kernel: Hypervisor detected: KVM
Aug 6 07:52:54.952542 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 6 07:52:54.952550 kernel: kvm-clock: using sched offset of 3660185523 cycles
Aug 6 07:52:54.952559 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 6 07:52:54.952567 kernel: tsc: Detected 2494.138 MHz processor
Aug 6 07:52:54.952575 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 6 07:52:54.952583 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 6 07:52:54.952591 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Aug 6 07:52:54.952599 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 6 07:52:54.952607 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 6 07:52:54.952618 kernel: ACPI: Early table checksum verification disabled
Aug 6 07:52:54.952626 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Aug 6 07:52:54.952634 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 6 07:52:54.952642 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 6 07:52:54.952650 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 6 07:52:54.952658 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 6 07:52:54.952666 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 6 07:52:54.952673 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 6 07:52:54.952681 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 6 07:52:54.952692 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 6 07:52:54.952700 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 6 07:52:54.952711 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Aug 6 07:52:54.952722 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 6 07:52:54.952734 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Aug 6 07:52:54.952745 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Aug 6 07:52:54.952756 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Aug 6 07:52:54.952776 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Aug 6 07:52:54.952788 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 6 07:52:54.952799 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 6 07:52:54.952813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 6 07:52:54.952838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 6 07:52:54.952852 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Aug 6 07:52:54.952867 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Aug 6 07:52:54.952882 kernel: Zone ranges:
Aug 6 07:52:54.952890 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 6 07:52:54.952898 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Aug 6 07:52:54.952906 kernel: Normal empty
Aug 6 07:52:54.952915 kernel: Movable zone start for each node
Aug 6 07:52:54.952923 kernel: Early memory node ranges
Aug 6 07:52:54.952933 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 6 07:52:54.952947 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Aug 6 07:52:54.952958 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Aug 6 07:52:54.952974 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 6 07:52:54.952986 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 6 07:52:54.952998 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Aug 6 07:52:54.953009 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 6 07:52:54.953020 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 6 07:52:54.953031 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 6 07:52:54.953044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 6 07:52:54.953054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 6 07:52:54.953062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 6 07:52:54.953074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 6 07:52:54.953083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 6 07:52:54.953091 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 6 07:52:54.953100 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 6 07:52:54.953108 kernel: TSC deadline timer available
Aug 6 07:52:54.953116 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 6 07:52:54.953160 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 6 07:52:54.953168 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Aug 6 07:52:54.953177 kernel: Booting paravirtualized kernel on KVM
Aug 6 07:52:54.953189 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 6 07:52:54.953198 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 6 07:52:54.953206 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Aug 6 07:52:54.953214 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Aug 6 07:52:54.953223 kernel: pcpu-alloc: [0] 0 1
Aug 6 07:52:54.953235 kernel: kvm-guest: PV spinlocks disabled, no host support
Aug 6 07:52:54.953248 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 6 07:52:54.953257 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 6 07:52:54.953268 kernel: random: crng init done
Aug 6 07:52:54.953276 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 6 07:52:54.953285 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 6 07:52:54.953293 kernel: Fallback order for Node 0: 0
Aug 6 07:52:54.953302 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Aug 6 07:52:54.953310 kernel: Policy zone: DMA32
Aug 6 07:52:54.953319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 6 07:52:54.953328 kernel: Memory: 1965048K/2096600K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49328K init, 2016K bss, 131292K reserved, 0K cma-reserved)
Aug 6 07:52:54.953336 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 6 07:52:54.953348 kernel: Kernel/User page tables isolation: enabled
Aug 6 07:52:54.953356 kernel: ftrace: allocating 37659 entries in 148 pages
Aug 6 07:52:54.953364 kernel: ftrace: allocated 148 pages with 3 groups
Aug 6 07:52:54.953372 kernel: Dynamic Preempt: voluntary
Aug 6 07:52:54.953380 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 6 07:52:54.953390 kernel: rcu: RCU event tracing is enabled.
Aug 6 07:52:54.953398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 6 07:52:54.953407 kernel: Trampoline variant of Tasks RCU enabled.
Aug 6 07:52:54.953415 kernel: Rude variant of Tasks RCU enabled.
Aug 6 07:52:54.953427 kernel: Tracing variant of Tasks RCU enabled.
Aug 6 07:52:54.953435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 6 07:52:54.953444 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 6 07:52:54.953452 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 6 07:52:54.953460 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 6 07:52:54.953468 kernel: Console: colour VGA+ 80x25
Aug 6 07:52:54.953477 kernel: printk: console [tty0] enabled
Aug 6 07:52:54.953485 kernel: printk: console [ttyS0] enabled
Aug 6 07:52:54.953493 kernel: ACPI: Core revision 20230628
Aug 6 07:52:54.953502 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 6 07:52:54.953514 kernel: APIC: Switch to symmetric I/O mode setup
Aug 6 07:52:54.953522 kernel: x2apic enabled
Aug 6 07:52:54.953530 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 6 07:52:54.953539 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 6 07:52:54.953548 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Aug 6 07:52:54.953556 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Aug 6 07:52:54.953565 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 6 07:52:54.953574 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 6 07:52:54.953594 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 6 07:52:54.953602 kernel: Spectre V2 : Mitigation: Retpolines
Aug 6 07:52:54.953611 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug 6 07:52:54.953623 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug 6 07:52:54.953632 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 6 07:52:54.953641 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 6 07:52:54.953649 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 6 07:52:54.953658 kernel: MDS: Mitigation: Clear CPU buffers
Aug 6 07:52:54.953667 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 6 07:52:54.953679 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 6 07:52:54.953688 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 6 07:52:54.953697 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 6 07:52:54.953705 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 6 07:52:54.953714 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 6 07:52:54.953723 kernel: Freeing SMP alternatives memory: 32K
Aug 6 07:52:54.953732 kernel: pid_max: default: 32768 minimum: 301
Aug 6 07:52:54.953741 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 6 07:52:54.953753 kernel: SELinux: Initializing.
Aug 6 07:52:54.953761 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 6 07:52:54.953770 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 6 07:52:54.953779 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Aug 6 07:52:54.953788 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 6 07:52:54.953797 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 6 07:52:54.953806 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 6 07:52:54.953814 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Aug 6 07:52:54.953827 kernel: signal: max sigframe size: 1776
Aug 6 07:52:54.953835 kernel: rcu: Hierarchical SRCU implementation.
Aug 6 07:52:54.953844 kernel: rcu: Max phase no-delay instances is 400.
Aug 6 07:52:54.953855 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 6 07:52:54.953867 kernel: smp: Bringing up secondary CPUs ...
Aug 6 07:52:54.953881 kernel: smpboot: x86: Booting SMP configuration:
Aug 6 07:52:54.953892 kernel: .... node #0, CPUs: #1
Aug 6 07:52:54.953904 kernel: smp: Brought up 1 node, 2 CPUs
Aug 6 07:52:54.953916 kernel: smpboot: Max logical packages: 1
Aug 6 07:52:54.953930 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Aug 6 07:52:54.953949 kernel: devtmpfs: initialized
Aug 6 07:52:54.953961 kernel: x86/mm: Memory block size: 128MB
Aug 6 07:52:54.953973 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 6 07:52:54.953985 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 6 07:52:54.953999 kernel: pinctrl core: initialized pinctrl subsystem
Aug 6 07:52:54.954011 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 6 07:52:54.954024 kernel: audit: initializing netlink subsys (disabled)
Aug 6 07:52:54.954037 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 6 07:52:54.954050 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 6 07:52:54.954069 kernel: audit: type=2000 audit(1722930773.999:1): state=initialized audit_enabled=0 res=1
Aug 6 07:52:54.954083 kernel: cpuidle: using governor menu
Aug 6 07:52:54.954095 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 6 07:52:54.954121 kernel: dca service started, version 1.12.1
Aug 6 07:52:54.954152 kernel: PCI: Using configuration type 1 for base access
Aug 6 07:52:54.954165 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 6 07:52:54.954178 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 6 07:52:54.954191 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 6 07:52:54.954202 kernel: ACPI: Added _OSI(Module Device)
Aug 6 07:52:54.954221 kernel: ACPI: Added _OSI(Processor Device)
Aug 6 07:52:54.954234 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 6 07:52:54.954248 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 6 07:52:54.954260 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 6 07:52:54.954273 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 6 07:52:54.954287 kernel: ACPI: Interpreter enabled
Aug 6 07:52:54.954298 kernel: ACPI: PM: (supports S0 S5)
Aug 6 07:52:54.954310 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 6 07:52:54.954322 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 6 07:52:54.954339 kernel: PCI: Using E820 reservations for host bridge windows
Aug 6 07:52:54.954368 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 6 07:52:54.954382 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 6 07:52:54.954638 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 6 07:52:54.954786 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 6 07:52:54.954939 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 6 07:52:54.954962 kernel: acpiphp: Slot [3] registered
Aug 6 07:52:54.954986 kernel: acpiphp: Slot [4] registered
Aug 6 07:52:54.955000 kernel: acpiphp: Slot [5] registered
Aug 6 07:52:54.955014 kernel: acpiphp: Slot [6] registered
Aug 6 07:52:54.955029 kernel: acpiphp: Slot [7] registered
Aug 6 07:52:54.955050 kernel: acpiphp: Slot [8] registered
Aug 6 07:52:54.955065 kernel: acpiphp: Slot [9] registered
Aug 6 07:52:54.955082 kernel: acpiphp: Slot [10] registered
Aug 6 07:52:54.955098 kernel: acpiphp: Slot [11] registered
Aug 6 07:52:54.955168 kernel: acpiphp: Slot [12] registered
Aug 6 07:52:54.955185 kernel: acpiphp: Slot [13] registered
Aug 6 07:52:54.955199 kernel: acpiphp: Slot [14] registered
Aug 6 07:52:54.955212 kernel: acpiphp: Slot [15] registered
Aug 6 07:52:54.955225 kernel: acpiphp: Slot [16] registered
Aug 6 07:52:54.955238 kernel: acpiphp: Slot [17] registered
Aug 6 07:52:54.955249 kernel: acpiphp: Slot [18] registered
Aug 6 07:52:54.955261 kernel: acpiphp: Slot [19] registered
Aug 6 07:52:54.955274 kernel: acpiphp: Slot [20] registered
Aug 6 07:52:54.955289 kernel: acpiphp: Slot [21] registered
Aug 6 07:52:54.955300 kernel: acpiphp: Slot [22] registered
Aug 6 07:52:54.955317 kernel: acpiphp: Slot [23] registered
Aug 6 07:52:54.955331 kernel: acpiphp: Slot [24] registered
Aug 6 07:52:54.955345 kernel: acpiphp: Slot [25] registered
Aug 6 07:52:54.955359 kernel: acpiphp: Slot [26] registered
Aug 6 07:52:54.955373 kernel: acpiphp: Slot [27] registered
Aug 6 07:52:54.955385 kernel: acpiphp: Slot [28] registered
Aug 6 07:52:54.955394 kernel: acpiphp: Slot [29] registered
Aug 6 07:52:54.955402 kernel: acpiphp: Slot [30] registered
Aug 6 07:52:54.955411 kernel: acpiphp: Slot [31] registered
Aug 6 07:52:54.955424 kernel: PCI host bridge to bus 0000:00
Aug 6 07:52:54.955588 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 6 07:52:54.955896 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 6 07:52:54.956006 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 6 07:52:54.956091 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 6 07:52:54.956195 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 6 07:52:54.956279 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 6 07:52:54.956415 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 6 07:52:54.956521 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 6 07:52:54.956663 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 6 07:52:54.956812 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Aug 6 07:52:54.956924 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 6 07:52:54.957045 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 6 07:52:54.957196 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 6 07:52:54.957348 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 6 07:52:54.957510 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Aug 6 07:52:54.957667 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Aug 6 07:52:54.957825 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 6 07:52:54.957948 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 6 07:52:54.958043 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 6 07:52:54.958189 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Aug 6 07:52:54.958292 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Aug 6 07:52:54.958429 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Aug 6 07:52:54.958525 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Aug 6 07:52:54.958647 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 6 07:52:54.958785 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 6 07:52:54.958926 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 6 07:52:54.959044 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Aug 6 07:52:54.959172 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Aug 6 07:52:54.959270 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Aug 6 07:52:54.959446 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 6 07:52:54.959580 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Aug 6 07:52:54.959695 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Aug 6 07:52:54.959856 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Aug 6 07:52:54.959966 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Aug 6 07:52:54.960078 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Aug 6 07:52:54.960315 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Aug 6 07:52:54.960478 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Aug 6 07:52:54.960651 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Aug 6 07:52:54.960798 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Aug 6 07:52:54.960938 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Aug 6 07:52:54.961079 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Aug 6 07:52:54.961327 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Aug 6 07:52:54.961431 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Aug 6 07:52:54.961530 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Aug 6 07:52:54.961664 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Aug 6 07:52:54.961830 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Aug 6 07:52:54.961944 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Aug 6 07:52:54.962063 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Aug 6 07:52:54.962081 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 6 07:52:54.962097 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 6 07:52:54.962169 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 6 07:52:54.962182 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 6 07:52:54.962191 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 6 07:52:54.962205 kernel: iommu: Default domain type: Translated
Aug 6 07:52:54.962215 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 6 07:52:54.962223 kernel: PCI: Using ACPI for IRQ routing
Aug 6 07:52:54.962233 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 6 07:52:54.962242 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 6 07:52:54.962251 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Aug 6 07:52:54.962368 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 6 07:52:54.962480 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 6 07:52:54.962614 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 6 07:52:54.962633 kernel: vgaarb: loaded
Aug 6 07:52:54.962643 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 6 07:52:54.962652 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 6 07:52:54.962661 kernel: clocksource: Switched to clocksource kvm-clock
Aug 6 07:52:54.962670 kernel: VFS: Disk quotas dquot_6.6.0
Aug 6 07:52:54.962679 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 6 07:52:54.962688 kernel: pnp: PnP ACPI init
Aug 6 07:52:54.962697 kernel: pnp: PnP ACPI: found 4 devices
Aug 6 07:52:54.962707 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 6 07:52:54.962720 kernel: NET: Registered PF_INET protocol family
Aug 6 07:52:54.962737 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 6 07:52:54.962752 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 6 07:52:54.962767 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 6 07:52:54.962781 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 6 07:52:54.962797 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 6 07:52:54.962813 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 6 07:52:54.962828 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 6 07:52:54.962843 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 6 07:52:54.962862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 6 07:52:54.962876 kernel: NET: Registered PF_XDP protocol family
Aug 6 07:52:54.962993 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 6 07:52:54.963084 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 6 07:52:54.963235 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 6 07:52:54.963324 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 6 07:52:54.963410 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 6 07:52:54.963514 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 6 07:52:54.963624 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 6 07:52:54.963637 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 6 07:52:54.963762 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 35129 usecs
Aug 6 07:52:54.963781 kernel: PCI: CLS 0 bytes, default 64
Aug 6 07:52:54.963795 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 6 07:52:54.963809 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Aug 6 07:52:54.963823 kernel: Initialise system trusted keyrings
Aug 6 07:52:54.963837 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 6 07:52:54.963855 kernel: Key type asymmetric registered
Aug 6 07:52:54.963864 kernel: Asymmetric key parser 'x509' registered
Aug 6 07:52:54.963873 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 6 07:52:54.963882 kernel: io scheduler mq-deadline registered
Aug 6 07:52:54.963891 kernel: io scheduler kyber registered
Aug 6 07:52:54.963900 kernel: io scheduler bfq registered
Aug 6 07:52:54.963909 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 6 07:52:54.963919 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Aug 6 07:52:54.963928 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 6 07:52:54.963937 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 6 07:52:54.963948 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 6 07:52:54.963957 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 6 07:52:54.963966 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 6 07:52:54.963975 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 6 07:52:54.963984 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 6 07:52:54.964175 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 6 07:52:54.964190 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 6 07:52:54.964282 kernel: rtc_cmos 00:03: registered as rtc0
Aug 6 07:52:54.964374 kernel: rtc_cmos 00:03: setting system clock to 2024-08-06T07:52:54 UTC (1722930774)
Aug 6 07:52:54.964463 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 6 07:52:54.964474 kernel: intel_pstate: CPU model not supported
Aug 6 07:52:54.964483 kernel: NET: Registered PF_INET6 protocol family
Aug 6 07:52:54.964492 kernel: Segment Routing with IPv6
Aug 6 07:52:54.964501 kernel: In-situ OAM (IOAM) with IPv6
Aug 6 07:52:54.964510 kernel: NET: Registered PF_PACKET protocol family
Aug 6 07:52:54.964519 kernel: Key type dns_resolver registered
Aug 6 07:52:54.964531 kernel: IPI shorthand broadcast: enabled
Aug 6 07:52:54.964539 kernel: sched_clock: Marking stable (1119008938, 109048925)->(1269986338, -41928475)
Aug 6 07:52:54.964549 kernel: registered taskstats version 1
Aug 6 07:52:54.964558 kernel: Loading compiled-in X.509 certificates
Aug 6 07:52:54.964567 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: e31e857530e65c19b206dbf3ab8297cc37ac5d55'
Aug 6 07:52:54.964576 kernel: Key type .fscrypt registered
Aug 6 07:52:54.964584 kernel: Key type fscrypt-provisioning registered
Aug 6 07:52:54.964593 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 6 07:52:54.964602 kernel: ima: Allocated hash algorithm: sha1
Aug 6 07:52:54.964615 kernel: ima: No architecture policies found
Aug 6 07:52:54.964624 kernel: clk: Disabling unused clocks
Aug 6 07:52:54.964633 kernel: Freeing unused kernel image (initmem) memory: 49328K
Aug 6 07:52:54.964642 kernel: Write protecting the kernel read-only data: 36864k
Aug 6 07:52:54.964651 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug 6 07:52:54.964681 kernel: Run /init as init process
Aug 6 07:52:54.964693 kernel: with arguments:
Aug 6 07:52:54.964703 kernel: /init
Aug 6 07:52:54.964712 kernel: with environment:
Aug 6 07:52:54.964724 kernel: HOME=/
Aug 6 07:52:54.964734 kernel: TERM=linux
Aug 6 07:52:54.964743 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 6 07:52:54.964755 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 6 07:52:54.964767 systemd[1]: Detected virtualization kvm.
Aug 6 07:52:54.964777 systemd[1]: Detected architecture x86-64.
Aug 6 07:52:54.964787 systemd[1]: Running in initrd.
Aug 6 07:52:54.964796 systemd[1]: No hostname configured, using default hostname.
Aug 6 07:52:54.964808 systemd[1]: Hostname set to .
Aug 6 07:52:54.964818 systemd[1]: Initializing machine ID from VM UUID.
Aug 6 07:52:54.964828 systemd[1]: Queued start job for default target initrd.target.
Aug 6 07:52:54.964838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 6 07:52:54.964848 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 6 07:52:54.964857 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 6 07:52:54.964868 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 6 07:52:54.964877 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 6 07:52:54.964890 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 6 07:52:54.964902 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 6 07:52:54.964911 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 6 07:52:54.964922 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 6 07:52:54.964932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 6 07:52:54.964942 systemd[1]: Reached target paths.target - Path Units.
Aug 6 07:52:54.964955 systemd[1]: Reached target slices.target - Slice Units.
Aug 6 07:52:54.964965 systemd[1]: Reached target swap.target - Swaps.
Aug 6 07:52:54.964975 systemd[1]: Reached target timers.target - Timer Units.
Aug 6 07:52:54.964988 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 6 07:52:54.964998 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 6 07:52:54.965008 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 6 07:52:54.965021 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 6 07:52:54.965031 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 6 07:52:54.965041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 6 07:52:54.965051 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 6 07:52:54.965061 systemd[1]: Reached target sockets.target - Socket Units.
Aug 6 07:52:54.965071 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 6 07:52:54.965081 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 6 07:52:54.965091 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 6 07:52:54.965104 systemd[1]: Starting systemd-fsck-usr.service...
Aug 6 07:52:54.965125 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 6 07:52:54.965145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 6 07:52:54.965155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 6 07:52:54.965194 systemd-journald[183]: Collecting audit messages is disabled.
Aug 6 07:52:54.965222 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 6 07:52:54.965233 systemd-journald[183]: Journal started
Aug 6 07:52:54.965255 systemd-journald[183]: Runtime Journal (/run/log/journal/a8d5f44991bf4f639c06a5a7ccc16a9e) is 4.9M, max 39.3M, 34.4M free.
Aug 6 07:52:54.972549 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 6 07:52:54.972510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 6 07:52:54.974647 systemd[1]: Finished systemd-fsck-usr.service.
Aug 6 07:52:54.987467 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 6 07:52:54.991161 systemd-modules-load[184]: Inserted module 'overlay'
Aug 6 07:52:54.997317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 6 07:52:55.011091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 6 07:52:55.043705 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 6 07:52:55.043758 kernel: Bridge firewalling registered
Aug 6 07:52:55.042918 systemd-modules-load[184]: Inserted module 'br_netfilter'
Aug 6 07:52:55.043689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 6 07:52:55.048681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 6 07:52:55.049299 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 6 07:52:55.056414 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 6 07:52:55.062397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 6 07:52:55.063962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 6 07:52:55.084691 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 6 07:52:55.085730 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 6 07:52:55.087338 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 6 07:52:55.093431 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 6 07:52:55.096308 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 6 07:52:55.109900 dracut-cmdline[219]: dracut-dracut-053
Aug 6 07:52:55.118148 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 6 07:52:55.141724 systemd-resolved[220]: Positive Trust Anchors:
Aug 6 07:52:55.141744 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 6 07:52:55.141783 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 6 07:52:55.145266 systemd-resolved[220]: Defaulting to hostname 'linux'.
Aug 6 07:52:55.148668 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 6 07:52:55.149514 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 6 07:52:55.231144 kernel: SCSI subsystem initialized
Aug 6 07:52:55.244160 kernel: Loading iSCSI transport class v2.0-870.
Aug 6 07:52:55.260159 kernel: iscsi: registered transport (tcp)
Aug 6 07:52:55.290155 kernel: iscsi: registered transport (qla4xxx)
Aug 6 07:52:55.290222 kernel: QLogic iSCSI HBA Driver
Aug 6 07:52:55.355466 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 6 07:52:55.362374 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 6 07:52:55.401945 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 6 07:52:55.402031 kernel: device-mapper: uevent: version 1.0.3
Aug 6 07:52:55.403488 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 6 07:52:55.455187 kernel: raid6: avx2x4 gen() 17062 MB/s
Aug 6 07:52:55.472183 kernel: raid6: avx2x2 gen() 16558 MB/s
Aug 6 07:52:55.489480 kernel: raid6: avx2x1 gen() 11080 MB/s
Aug 6 07:52:55.489555 kernel: raid6: using algorithm avx2x4 gen() 17062 MB/s
Aug 6 07:52:55.507240 kernel: raid6: .... xor() 9165 MB/s, rmw enabled
Aug 6 07:52:55.507321 kernel: raid6: using avx2x2 recovery algorithm
Aug 6 07:52:55.536157 kernel: xor: automatically using best checksumming function avx
Aug 6 07:52:55.737165 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 6 07:52:55.753689 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 6 07:52:55.765460 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 6 07:52:55.783578 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Aug 6 07:52:55.789665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 6 07:52:55.799923 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 6 07:52:55.819625 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Aug 6 07:52:55.863090 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 6 07:52:55.869421 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 6 07:52:55.939283 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 6 07:52:55.947226 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 6 07:52:55.975682 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 6 07:52:55.985955 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 6 07:52:55.987339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 6 07:52:55.988527 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 6 07:52:55.993377 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 6 07:52:56.017175 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 6 07:52:56.023156 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Aug 6 07:52:56.060819 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 6 07:52:56.060971 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 6 07:52:56.060986 kernel: GPT:9289727 != 125829119
Aug 6 07:52:56.060998 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 6 07:52:56.061009 kernel: GPT:9289727 != 125829119
Aug 6 07:52:56.061029 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 6 07:52:56.061040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 6 07:52:56.064168 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Aug 6 07:52:56.072841 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Aug 6 07:52:56.086148 kernel: ACPI: bus type USB registered
Aug 6 07:52:56.087129 kernel: usbcore: registered new interface driver usbfs
Aug 6 07:52:56.087173 kernel: cryptd: max_cpu_qlen set to 1000
Aug 6 07:52:56.090376 kernel: usbcore: registered new interface driver hub
Aug 6 07:52:56.094210 kernel: usbcore: registered new device driver usb
Aug 6 07:52:56.108158 kernel: scsi host0: Virtio SCSI HBA
Aug 6 07:52:56.126558 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 6 07:52:56.126697 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 6 07:52:56.127370 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 6 07:52:56.129849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 6 07:52:56.130054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 6 07:52:56.130646 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 6 07:52:56.140237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 6 07:52:56.165458 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 6 07:52:56.178057 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 6 07:52:56.179146 kernel: AES CTR mode by8 optimization enabled
Aug 6 07:52:56.201144 kernel: libata version 3.00 loaded.
Aug 6 07:52:56.210147 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 6 07:52:56.215169 kernel: scsi host1: ata_piix
Aug 6 07:52:56.215407 kernel: scsi host2: ata_piix
Aug 6 07:52:56.215586 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Aug 6 07:52:56.215621 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Aug 6 07:52:56.220758 kernel: BTRFS: device fsid d3844c60-0a2c-449a-9ee9-2a875f8d8e12 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (462)
Aug 6 07:52:56.220825 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449)
Aug 6 07:52:56.249759 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 6 07:52:56.276837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 6 07:52:56.291128 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Aug 6 07:52:56.291385 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Aug 6 07:52:56.291565 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Aug 6 07:52:56.291744 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Aug 6 07:52:56.291920 kernel: hub 1-0:1.0: USB hub found
Aug 6 07:52:56.292124 kernel: hub 1-0:1.0: 2 ports detected
Aug 6 07:52:56.291668 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 6 07:52:56.292495 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 6 07:52:56.298293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 6 07:52:56.303393 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 6 07:52:56.306445 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 6 07:52:56.316923 disk-uuid[533]: Primary Header is updated.
Aug 6 07:52:56.316923 disk-uuid[533]: Secondary Entries is updated.
Aug 6 07:52:56.316923 disk-uuid[533]: Secondary Header is updated.
Aug 6 07:52:56.330849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 6 07:52:56.334048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 6 07:52:56.344155 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 6 07:52:57.338157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 6 07:52:57.338237 disk-uuid[534]: The operation has completed successfully.
Aug 6 07:52:57.387598 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 6 07:52:57.387774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 6 07:52:57.401365 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 6 07:52:57.406231 sh[562]: Success
Aug 6 07:52:57.428152 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 6 07:52:57.502056 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 6 07:52:57.510281 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 6 07:52:57.515530 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 6 07:52:57.541254 kernel: BTRFS info (device dm-0): first mount of filesystem d3844c60-0a2c-449a-9ee9-2a875f8d8e12
Aug 6 07:52:57.541322 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 6 07:52:57.541337 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 6 07:52:57.542423 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 6 07:52:57.543183 kernel: BTRFS info (device dm-0): using free space tree
Aug 6 07:52:57.552168 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 6 07:52:57.553273 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 6 07:52:57.559375 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 6 07:52:57.563301 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 6 07:52:57.576394 kernel: BTRFS info (device vda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 6 07:52:57.576478 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 6 07:52:57.576493 kernel: BTRFS info (device vda6): using free space tree
Aug 6 07:52:57.584630 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 6 07:52:57.599397 kernel: BTRFS info (device vda6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 6 07:52:57.598909 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 6 07:52:57.611489 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 6 07:52:57.620448 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 6 07:52:57.734612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 6 07:52:57.748592 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 6 07:52:57.778210 ignition[654]: Ignition 2.18.0
Aug 6 07:52:57.778225 ignition[654]: Stage: fetch-offline
Aug 6 07:52:57.781127 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 6 07:52:57.778316 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Aug 6 07:52:57.782802 systemd-networkd[748]: lo: Link UP
Aug 6 07:52:57.778335 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 6 07:52:57.782810 systemd-networkd[748]: lo: Gained carrier
Aug 6 07:52:57.778614 ignition[654]: parsed url from cmdline: ""
Aug 6 07:52:57.778621 ignition[654]: no config URL provided
Aug 6 07:52:57.778631 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Aug 6 07:52:57.778648 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Aug 6 07:52:57.786330 systemd-networkd[748]: Enumeration completed
Aug 6 07:52:57.778657 ignition[654]: failed to fetch config: resource requires networking
Aug 6 07:52:57.786952 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 6 07:52:57.778962 ignition[654]: Ignition finished successfully
Aug 6 07:52:57.786958 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Aug 6 07:52:57.786969 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 6 07:52:57.788288 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 6 07:52:57.788292 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 6 07:52:57.788390 systemd[1]: Reached target network.target - Network.
Aug 6 07:52:57.789582 systemd-networkd[748]: eth0: Link UP
Aug 6 07:52:57.789589 systemd-networkd[748]: eth0: Gained carrier
Aug 6 07:52:57.789601 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 6 07:52:57.796504 systemd-networkd[748]: eth1: Link UP
Aug 6 07:52:57.796509 systemd-networkd[748]: eth1: Gained carrier
Aug 6 07:52:57.796525 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 6 07:52:57.797360 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 6 07:52:57.806234 systemd-networkd[748]: eth0: DHCPv4 address 143.244.180.140/20, gateway 143.244.176.1 acquired from 169.254.169.253
Aug 6 07:52:57.810200 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.10/20 acquired from 169.254.169.253
Aug 6 07:52:57.818167 ignition[756]: Ignition 2.18.0
Aug 6 07:52:57.818183 ignition[756]: Stage: fetch
Aug 6 07:52:57.818455 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Aug 6 07:52:57.818469 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 6 07:52:57.818569 ignition[756]: parsed url from cmdline: ""
Aug 6 07:52:57.818573 ignition[756]: no config URL provided
Aug 6 07:52:57.818579 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Aug 6 07:52:57.818588 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Aug 6 07:52:57.818607 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Aug 6 07:52:57.832578 ignition[756]: GET result: OK
Aug 6 07:52:57.832907 ignition[756]: parsing config with SHA512: cdea3ee48e0ad500363bd8328af5e9d30074e2a44bc8023c9acba02d8bd115277435d45ccb1a36bbe4d2bc3b99a66b46f608708ca4612bbb5de5617df6f7196c
Aug 6 07:52:57.841149 unknown[756]: fetched base config from "system"
Aug 6 07:52:57.841167 unknown[756]: fetched base config from "system"
Aug 6 07:52:57.841742 ignition[756]: fetch: fetch complete
Aug 6 07:52:57.841175 unknown[756]: fetched user config from "digitalocean"
Aug 6 07:52:57.841749 ignition[756]: fetch: fetch passed
Aug 6 07:52:57.844029 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 6 07:52:57.841810 ignition[756]: Ignition finished successfully
Aug 6 07:52:57.854423 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 6 07:52:57.871859 ignition[764]: Ignition 2.18.0
Aug 6 07:52:57.871875 ignition[764]: Stage: kargs
Aug 6 07:52:57.872084 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Aug 6 07:52:57.872098 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 6 07:52:57.873053 ignition[764]: kargs: kargs passed
Aug 6 07:52:57.874722 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 6 07:52:57.873132 ignition[764]: Ignition finished successfully
Aug 6 07:52:57.881401 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 6 07:52:57.903161 ignition[771]: Ignition 2.18.0
Aug 6 07:52:57.903176 ignition[771]: Stage: disks
Aug 6 07:52:57.903408 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 6 07:52:57.903420 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 6 07:52:57.904746 ignition[771]: disks: disks passed
Aug 6 07:52:57.904814 ignition[771]: Ignition finished successfully
Aug 6 07:52:57.907526 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 6 07:52:57.909486 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 6 07:52:57.913720 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 6 07:52:57.914984 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 6 07:52:57.916008 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 6 07:52:57.916822 systemd[1]: Reached target basic.target - Basic System.
Aug 6 07:52:57.926403 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 6 07:52:57.944764 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 6 07:52:57.948500 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 6 07:52:57.957062 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 6 07:52:58.069160 kernel: EXT4-fs (vda9): mounted filesystem e865ac73-053b-4efa-9a0f-50dec3f650d9 r/w with ordered data mode. Quota mode: none.
Aug 6 07:52:58.070159 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 6 07:52:58.071302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 6 07:52:58.088322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 6 07:52:58.091250 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 6 07:52:58.099365 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Aug 6 07:52:58.107459 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (788)
Aug 6 07:52:58.107496 kernel: BTRFS info (device vda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 6 07:52:58.107510 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 6 07:52:58.107522 kernel: BTRFS info (device vda6): using free space tree
Aug 6 07:52:58.104464 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 6 07:52:58.111064 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 6 07:52:58.108485 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 6 07:52:58.108525 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 6 07:52:58.116540 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 6 07:52:58.117670 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 6 07:52:58.127347 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 6 07:52:58.198147 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Aug 6 07:52:58.213619 coreos-metadata[804]: Aug 06 07:52:58.212 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 6 07:52:58.215239 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Aug 6 07:52:58.217569 coreos-metadata[790]: Aug 06 07:52:58.217 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 6 07:52:58.222502 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Aug 6 07:52:58.229905 coreos-metadata[804]: Aug 06 07:52:58.226 INFO Fetch successful
Aug 6 07:52:58.231422 coreos-metadata[790]: Aug 06 07:52:58.231 INFO Fetch successful
Aug 6 07:52:58.233514 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 6 07:52:58.236678 coreos-metadata[804]: Aug 06 07:52:58.236 INFO wrote hostname ci-3975.2.0-f-5a6fbdc7ed to /sysroot/etc/hostname
Aug 6 07:52:58.239264 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Aug 6 07:52:58.239381 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Aug 6 07:52:58.240173 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 6 07:52:58.359323 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 6 07:52:58.364340 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 6 07:52:58.366348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 6 07:52:58.382230 kernel: BTRFS info (device vda6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 6 07:52:58.404506 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 6 07:52:58.412922 ignition[909]: INFO : Ignition 2.18.0
Aug 6 07:52:58.415166 ignition[909]: INFO : Stage: mount
Aug 6 07:52:58.415166 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 6 07:52:58.415166 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 6 07:52:58.417075 ignition[909]: INFO : mount: mount passed
Aug 6 07:52:58.417075 ignition[909]: INFO : Ignition finished successfully
Aug 6 07:52:58.417858 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 6 07:52:58.434414 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 6 07:52:58.540240 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 6 07:52:58.545408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 6 07:52:58.559498 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921)
Aug 6 07:52:58.559575 kernel: BTRFS info (device vda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 6 07:52:58.561301 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 6 07:52:58.561364 kernel: BTRFS info (device vda6): using free space tree
Aug 6 07:52:58.566481 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 6 07:52:58.568078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 6 07:52:58.603492 ignition[937]: INFO : Ignition 2.18.0
Aug 6 07:52:58.604617 ignition[937]: INFO : Stage: files
Aug 6 07:52:58.604617 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 6 07:52:58.604617 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 6 07:52:58.606556 ignition[937]: DEBUG : files: compiled without relabeling support, skipping
Aug 6 07:52:58.607239 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 6 07:52:58.607239 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 6 07:52:58.611948 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 6 07:52:58.612814 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 6 07:52:58.612814 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 6 07:52:58.612681 unknown[937]: wrote ssh authorized keys file for user: core
Aug 6 07:52:58.615148 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 6 07:52:58.615148 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 6 07:52:58.644733 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 6 07:52:58.698667 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 6 07:52:58.700914 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 6 07:52:58.700914 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 6 07:52:58.700914 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 6 07:52:58.703178 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Aug 6 07:52:59.002624 systemd-networkd[748]: eth0: Gained IPv6LL
Aug 6 07:52:59.130526 systemd-networkd[748]: eth1: Gained IPv6LL
Aug 6 07:52:59.163571 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 6 07:52:59.406325 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 6 07:52:59.407848 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 6 07:52:59.410241 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 6 07:52:59.412500 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 6 07:52:59.412500 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 6 07:52:59.412500 ignition[937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 6 07:52:59.412500 ignition[937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 6 07:52:59.412500 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 6 07:52:59.412500 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 6 07:52:59.412500 ignition[937]: INFO : files: files passed
Aug 6 07:52:59.412500 ignition[937]: INFO : Ignition finished successfully
Aug 6 07:52:59.414037 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 6 07:52:59.422419 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 6 07:52:59.426281 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 6 07:52:59.429328 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 6 07:52:59.429484 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 6 07:52:59.453238 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 6 07:52:59.453238 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 6 07:52:59.455932 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 6 07:52:59.457572 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 6 07:52:59.459039 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 6 07:52:59.467452 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 6 07:52:59.511352 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 6 07:52:59.511474 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 6 07:52:59.513294 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 6 07:52:59.513906 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 6 07:52:59.514953 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 6 07:52:59.520389 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 6 07:52:59.543462 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 6 07:52:59.552395 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 6 07:52:59.567572 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 6 07:52:59.568927 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 6 07:52:59.569536 systemd[1]: Stopped target timers.target - Timer Units.
Aug 6 07:52:59.570071 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 6 07:52:59.570256 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 6 07:52:59.571496 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 6 07:52:59.572225 systemd[1]: Stopped target basic.target - Basic System.
Aug 6 07:52:59.573315 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 6 07:52:59.574413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 6 07:52:59.575419 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 6 07:52:59.576370 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 6 07:52:59.577237 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 6 07:52:59.578357 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 6 07:52:59.579372 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 6 07:52:59.580461 systemd[1]: Stopped target swap.target - Swaps.
Aug 6 07:52:59.581300 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 6 07:52:59.581492 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 6 07:52:59.582719 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 6 07:52:59.583877 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 6 07:52:59.584615 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 6 07:52:59.584762 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 6 07:52:59.585607 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 6 07:52:59.585795 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 6 07:52:59.587249 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 6 07:52:59.587515 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 6 07:52:59.588693 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 6 07:52:59.588898 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 6 07:52:59.589759 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 6 07:52:59.589901 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 6 07:52:59.607498 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 6 07:52:59.609145 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 6 07:52:59.609342 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 6 07:52:59.612027 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 6 07:52:59.612552 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 6 07:52:59.612763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 6 07:52:59.615414 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 6 07:52:59.615604 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 6 07:52:59.626698 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 6 07:52:59.626862 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 6 07:52:59.637151 ignition[991]: INFO : Ignition 2.18.0
Aug 6 07:52:59.637151 ignition[991]: INFO : Stage: umount
Aug 6 07:52:59.637151 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 6 07:52:59.637151 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 6 07:52:59.640369 ignition[991]: INFO : umount: umount passed
Aug 6 07:52:59.640369 ignition[991]: INFO : Ignition finished successfully
Aug 6 07:52:59.640222 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 6 07:52:59.640342 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 6 07:52:59.641641 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 6 07:52:59.641770 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 6 07:52:59.644310 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 6 07:52:59.644401 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 6 07:52:59.644991 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 6 07:52:59.645060 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 6 07:52:59.651575 systemd[1]: Stopped target network.target - Network.
Aug 6 07:52:59.652412 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 6 07:52:59.652511 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 6 07:52:59.653416 systemd[1]: Stopped target paths.target - Path Units.
Aug 6 07:52:59.655942 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 6 07:52:59.659563 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 6 07:52:59.688448 systemd[1]: Stopped target slices.target - Slice Units.
Aug 6 07:52:59.690093 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 6 07:52:59.691220 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 6 07:52:59.691305 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 6 07:52:59.717189 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 6 07:52:59.717291 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 6 07:52:59.718368 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 6 07:52:59.718498 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 6 07:52:59.719410 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 6 07:52:59.719496 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 6 07:52:59.721169 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 6 07:52:59.722384 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 6 07:52:59.724864 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 6 07:52:59.725749 systemd-networkd[748]: eth0: DHCPv6 lease lost
Aug 6 07:52:59.735265 systemd-networkd[748]: eth1: DHCPv6 lease lost
Aug 6 07:52:59.736911 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 6 07:52:59.737467 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 6 07:52:59.742444 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 6 07:52:59.742670 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 6 07:52:59.743929 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 6 07:52:59.744061 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 6 07:52:59.748853 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 6 07:52:59.748952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 6 07:52:59.750278 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 6 07:52:59.750387 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 6 07:52:59.765500 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 6 07:52:59.766808 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 6 07:52:59.766945 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 6 07:52:59.768054 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 6 07:52:59.768165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 6 07:52:59.768913 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 6 07:52:59.768978 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 6 07:52:59.769930 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 6 07:52:59.769985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 6 07:52:59.772154 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 6 07:52:59.791152 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 6 07:52:59.792673 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 6 07:52:59.794055 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 6 07:52:59.794333 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 6 07:52:59.798162 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 6 07:52:59.798281 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 6 07:52:59.799825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 6 07:52:59.799905 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 6 07:52:59.801241 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 6 07:52:59.801339 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 6 07:52:59.802620 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 6 07:52:59.802726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 6 07:52:59.804550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 6 07:52:59.804670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 6 07:52:59.817229 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 6 07:52:59.817987 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 6 07:52:59.818145 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 6 07:52:59.818867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 6 07:52:59.818987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 6 07:52:59.829808 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 6 07:52:59.829956 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 6 07:52:59.831816 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 6 07:52:59.839554 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 6 07:52:59.857912 systemd[1]: Switching root.
Aug 6 07:52:59.894163 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 6 07:52:59.894298 systemd-journald[183]: Journal stopped
Aug 6 07:53:01.806736 kernel: SELinux: policy capability network_peer_controls=1
Aug 6 07:53:01.806862 kernel: SELinux: policy capability open_perms=1
Aug 6 07:53:01.806887 kernel: SELinux: policy capability extended_socket_class=1
Aug 6 07:53:01.806904 kernel: SELinux: policy capability always_check_network=0
Aug 6 07:53:01.806920 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 6 07:53:01.806936 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 6 07:53:01.806953 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 6 07:53:01.807004 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 6 07:53:01.807020 kernel: audit: type=1403 audit(1722930780.170:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 6 07:53:01.807051 systemd[1]: Successfully loaded SELinux policy in 57.206ms.
Aug 6 07:53:01.807079 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.400ms.
Aug 6 07:53:01.807099 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 6 07:53:01.807135 systemd[1]: Detected virtualization kvm.
Aug 6 07:53:01.807152 systemd[1]: Detected architecture x86-64.
Aug 6 07:53:01.807170 systemd[1]: Detected first boot.
Aug 6 07:53:01.807188 systemd[1]: Hostname set to .
Aug 6 07:53:01.807210 systemd[1]: Initializing machine ID from VM UUID.
Aug 6 07:53:01.807229 zram_generator::config[1034]: No configuration found.
Aug 6 07:53:01.807246 systemd[1]: Populated /etc with preset unit settings.
Aug 6 07:53:01.807263 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 6 07:53:01.807280 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 6 07:53:01.807298 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 6 07:53:01.807317 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 6 07:53:01.807334 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 6 07:53:01.807356 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 6 07:53:01.807374 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 6 07:53:01.807391 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 6 07:53:01.807408 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 6 07:53:01.807437 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 6 07:53:01.807454 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 6 07:53:01.807471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 6 07:53:01.807489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 6 07:53:01.807505 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 6 07:53:01.807528 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 6 07:53:01.807557 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 6 07:53:01.807575 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 6 07:53:01.807593 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 6 07:53:01.807640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 6 07:53:01.807659 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 6 07:53:01.807678 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 6 07:53:01.807702 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 6 07:53:01.807720 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 6 07:53:01.807738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 6 07:53:01.807764 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 6 07:53:01.807782 systemd[1]: Reached target slices.target - Slice Units.
Aug 6 07:53:01.807801 systemd[1]: Reached target swap.target - Swaps.
Aug 6 07:53:01.807819 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 6 07:53:01.807837 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 6 07:53:01.807860 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 6 07:53:01.807892 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 6 07:53:01.807912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 6 07:53:01.807930 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 6 07:53:01.807947 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 6 07:53:01.807966 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 6 07:53:01.807984 systemd[1]: Mounting media.mount - External Media Directory...
Aug 6 07:53:01.808001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 6 07:53:01.808019 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 6 07:53:01.808043 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 6 07:53:01.808064 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 6 07:53:01.808087 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 6 07:53:01.808354 systemd[1]: Reached target machines.target - Containers.
Aug 6 07:53:01.808393 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 6 07:53:01.808412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 6 07:53:01.808430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 6 07:53:01.808449 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 6 07:53:01.808467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 6 07:53:01.808493 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 6 07:53:01.808561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 6 07:53:01.808579 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 6 07:53:01.808602 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 6 07:53:01.808638 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 6 07:53:01.808657 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 6 07:53:01.808673 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 6 07:53:01.808692 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 6 07:53:01.808718 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 6 07:53:01.808736 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 6 07:53:01.808754 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 6 07:53:01.808771 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 6 07:53:01.808789 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 6 07:53:01.808808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 6 07:53:01.808902 systemd-journald[1102]: Collecting audit messages is disabled.
Aug 6 07:53:01.808953 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 6 07:53:01.808971 systemd[1]: Stopped verity-setup.service.
Aug 6 07:53:01.808990 systemd-journald[1102]: Journal started
Aug 6 07:53:01.809025 systemd-journald[1102]: Runtime Journal (/run/log/journal/a8d5f44991bf4f639c06a5a7ccc16a9e) is 4.9M, max 39.3M, 34.4M free.
Aug 6 07:53:01.372592 systemd[1]: Queued start job for default target multi-user.target.
Aug 6 07:53:01.409276 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 6 07:53:01.812199 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 6 07:53:01.410220 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 6 07:53:01.819183 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 6 07:53:01.822078 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 6 07:53:01.825210 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 6 07:53:01.828540 systemd[1]: Mounted media.mount - External Media Directory.
Aug 6 07:53:01.829398 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 6 07:53:01.830199 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 6 07:53:01.831465 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 6 07:53:01.833883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 6 07:53:01.836851 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 6 07:53:01.837123 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 6 07:53:01.839685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 6 07:53:01.840025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 6 07:53:01.866197 kernel: ACPI: bus type drm_connector registered
Aug 6 07:53:01.869702 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 6 07:53:01.869991 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 6 07:53:01.873746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 6 07:53:01.876310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 6 07:53:01.899219 kernel: loop: module loaded
Aug 6 07:53:01.910315 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 6 07:53:01.912935 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 6 07:53:01.918526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 6 07:53:01.924831 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 6 07:53:01.947836 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 6 07:53:01.953913 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 6 07:53:01.954863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 6 07:53:01.963889 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 6 07:53:01.966770 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 6 07:53:01.966837 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 6 07:53:01.973305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 6 07:53:01.985456 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 6 07:53:01.998163 kernel: fuse: init (API version 7.39)
Aug 6 07:53:01.997089 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 6 07:53:01.998172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 6 07:53:02.010583 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 6 07:53:02.018521 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 6 07:53:02.019298 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 6 07:53:02.025533 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 6 07:53:02.033528 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 6 07:53:02.036909 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 6 07:53:02.037241 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 6 07:53:02.040395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 6 07:53:02.043215 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 6 07:53:02.068403 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 6 07:53:02.082074 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 6 07:53:02.106398 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 6 07:53:02.176757 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 6 07:53:02.179727 systemd-journald[1102]: Time spent on flushing to /var/log/journal/a8d5f44991bf4f639c06a5a7ccc16a9e is 215.218ms for 984 entries.
Aug 6 07:53:02.179727 systemd-journald[1102]: System Journal (/var/log/journal/a8d5f44991bf4f639c06a5a7ccc16a9e) is 8.0M, max 195.6M, 187.6M free.
Aug 6 07:53:02.426010 systemd-journald[1102]: Received client request to flush runtime journal.
Aug 6 07:53:02.426128 kernel: loop0: detected capacity change from 0 to 8
Aug 6 07:53:02.427034 kernel: block loop0: the capability attribute has been deprecated.
Aug 6 07:53:02.428967 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 6 07:53:02.429010 kernel: loop1: detected capacity change from 0 to 139904
Aug 6 07:53:02.429036 kernel: loop2: detected capacity change from 0 to 80568
Aug 6 07:53:02.180595 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 6 07:53:02.190729 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 6 07:53:02.193264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 6 07:53:02.210688 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 6 07:53:02.347044 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 6 07:53:02.369699 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 6 07:53:02.372288 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 6 07:53:02.378282 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 6 07:53:02.395099 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 6 07:53:02.397751 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 6 07:53:02.410553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 6 07:53:02.437191 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 6 07:53:02.476453 kernel: loop3: detected capacity change from 0 to 209816
Aug 6 07:53:02.488448 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 6 07:53:02.559270 kernel: loop4: detected capacity change from 0 to 8
Aug 6 07:53:02.562943 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Aug 6 07:53:02.564200 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Aug 6 07:53:02.570141 kernel: loop5: detected capacity change from 0 to 139904
Aug 6 07:53:02.611191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 6 07:53:02.620239 kernel: loop6: detected capacity change from 0 to 80568
Aug 6 07:53:02.666302 kernel: loop7: detected capacity change from 0 to 209816
Aug 6 07:53:02.703051 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Aug 6 07:53:02.704033 (sd-merge)[1177]: Merged extensions into '/usr'.
Aug 6 07:53:02.720264 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 6 07:53:02.720300 systemd[1]: Reloading...
Aug 6 07:53:02.937156 zram_generator::config[1202]: No configuration found.
Aug 6 07:53:03.105242 ldconfig[1138]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 6 07:53:03.272561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 6 07:53:03.368822 systemd[1]: Reloading finished in 647 ms.
Aug 6 07:53:03.412161 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 6 07:53:03.415873 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 6 07:53:03.433632 systemd[1]: Starting ensure-sysext.service...
Aug 6 07:53:03.442768 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 6 07:53:03.471752 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Aug 6 07:53:03.471782 systemd[1]: Reloading...
Aug 6 07:53:03.529621 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 6 07:53:03.530219 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 6 07:53:03.531966 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 6 07:53:03.532528 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Aug 6 07:53:03.532608 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Aug 6 07:53:03.538231 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Aug 6 07:53:03.538250 systemd-tmpfiles[1246]: Skipping /boot
Aug 6 07:53:03.564656 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Aug 6 07:53:03.564677 systemd-tmpfiles[1246]: Skipping /boot
Aug 6 07:53:03.666808 zram_generator::config[1274]: No configuration found.
Aug 6 07:53:03.855939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 6 07:53:03.928320 systemd[1]: Reloading finished in 455 ms.
Aug 6 07:53:03.953569 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 6 07:53:03.959218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 6 07:53:03.984697 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 6 07:53:03.990521 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 6 07:53:04.001444 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 6 07:53:04.018677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 6 07:53:04.030633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 07:53:04.040596 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 6 07:53:04.059022 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 6 07:53:04.065490 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:53:04.065826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:53:04.074840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 07:53:04.085850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 07:53:04.098214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 07:53:04.100652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:53:04.100928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:53:04.104556 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Aug 6 07:53:04.109831 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 6 07:53:04.114677 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Aug 6 07:53:04.118147 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:53:04.118487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:53:04.118767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:53:04.118950 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 6 07:53:04.119075 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:53:04.126892 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:53:04.130030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:53:04.142028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 6 07:53:04.143229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:53:04.143509 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Aug 6 07:53:04.143669 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:53:04.152220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 07:53:04.162523 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 6 07:53:04.164671 systemd[1]: Finished ensure-sysext.service. Aug 6 07:53:04.194362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 07:53:04.194636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 07:53:04.218892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 07:53:04.222389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 07:53:04.233908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 07:53:04.247797 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 6 07:53:04.250477 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 6 07:53:04.265748 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 6 07:53:04.284700 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 6 07:53:04.287705 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 07:53:04.289344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 6 07:53:04.293095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 07:53:04.305905 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 6 07:53:04.307484 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Aug 6 07:53:04.361290 augenrules[1369]: No rules Aug 6 07:53:04.368915 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 6 07:53:04.374700 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1336) Aug 6 07:53:04.375667 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 6 07:53:04.387340 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 6 07:53:04.420342 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 6 07:53:04.422284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:53:04.422540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:53:04.429838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 07:53:04.436461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 07:53:04.455272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 07:53:04.456985 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:53:04.457068 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 6 07:53:04.457102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:53:04.458030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 07:53:04.460576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 07:53:04.468485 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Aug 6 07:53:04.525195 kernel: ISO 9660 Extensions: RRIP_1991A Aug 6 07:53:04.527439 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 6 07:53:04.543841 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 07:53:04.544197 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 6 07:53:04.545164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 07:53:04.561424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 07:53:04.564415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 07:53:04.565248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1337) Aug 6 07:53:04.566662 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 07:53:04.710412 systemd-resolved[1320]: Positive Trust Anchors: Aug 6 07:53:04.710951 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 6 07:53:04.711066 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 6 07:53:04.711906 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 6 07:53:04.713321 systemd[1]: Reached target time-set.target - System Time Set. 
Aug 6 07:53:04.720127 systemd-resolved[1320]: Using system hostname 'ci-3975.2.0-f-5a6fbdc7ed'. Aug 6 07:53:04.724305 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 6 07:53:04.725034 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 6 07:53:04.734962 systemd-networkd[1340]: lo: Link UP Aug 6 07:53:04.734975 systemd-networkd[1340]: lo: Gained carrier Aug 6 07:53:04.742225 systemd-networkd[1340]: Enumeration completed Aug 6 07:53:04.742434 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 6 07:53:04.743496 systemd[1]: Reached target network.target - Network. Aug 6 07:53:04.746353 systemd-networkd[1340]: eth0: Configuring with /run/systemd/network/10-de:47:0c:b9:1f:ae.network. Aug 6 07:53:04.752492 systemd-networkd[1340]: eth1: Configuring with /run/systemd/network/10-ba:1d:5b:c3:53:b1.network. Aug 6 07:53:04.753471 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 6 07:53:04.756358 systemd-networkd[1340]: eth0: Link UP Aug 6 07:53:04.756371 systemd-networkd[1340]: eth0: Gained carrier Aug 6 07:53:04.764805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 6 07:53:04.767645 systemd-networkd[1340]: eth1: Link UP Aug 6 07:53:04.767657 systemd-networkd[1340]: eth1: Gained carrier Aug 6 07:53:04.774482 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 6 07:53:04.775219 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Aug 6 07:53:04.777691 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. 
Aug 6 07:53:04.800421 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 6 07:53:04.803152 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 6 07:53:04.811201 kernel: ACPI: button: Power Button [PWRF] Aug 6 07:53:04.812814 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 6 07:53:04.847244 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 6 07:53:04.911160 kernel: mousedev: PS/2 mouse device common for all mice Aug 6 07:53:04.925704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:53:04.947160 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 6 07:53:04.951128 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 6 07:53:04.960502 kernel: Console: switching to colour dummy device 80x25 Aug 6 07:53:04.960638 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 6 07:53:04.960670 kernel: [drm] features: -context_init Aug 6 07:53:04.964179 kernel: [drm] number of scanouts: 1 Aug 6 07:53:04.968142 kernel: [drm] number of cap sets: 0 Aug 6 07:53:04.968244 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Aug 6 07:53:04.982145 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 6 07:53:04.984579 kernel: Console: switching to colour frame buffer device 128x48 Aug 6 07:53:04.999179 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 6 07:53:05.003271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 07:53:05.003644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:53:05.029567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:53:05.037615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 6 07:53:05.037920 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:53:05.105218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:53:05.176149 kernel: EDAC MC: Ver: 3.0.0 Aug 6 07:53:05.206048 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 6 07:53:05.221690 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 6 07:53:05.222546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:53:05.243063 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 6 07:53:05.279263 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 6 07:53:05.280070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 6 07:53:05.280272 systemd[1]: Reached target sysinit.target - System Initialization. Aug 6 07:53:05.280538 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 6 07:53:05.280756 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 6 07:53:05.281487 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 6 07:53:05.282536 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 6 07:53:05.282654 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 6 07:53:05.282747 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 6 07:53:05.282815 systemd[1]: Reached target paths.target - Path Units. Aug 6 07:53:05.282877 systemd[1]: Reached target timers.target - Timer Units. Aug 6 07:53:05.285619 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Aug 6 07:53:05.288967 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 6 07:53:05.297330 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 6 07:53:05.301570 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 6 07:53:05.304921 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 6 07:53:05.306307 systemd[1]: Reached target sockets.target - Socket Units. Aug 6 07:53:05.309379 systemd[1]: Reached target basic.target - Basic System. Aug 6 07:53:05.311292 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 6 07:53:05.311392 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 6 07:53:05.323373 systemd[1]: Starting containerd.service - containerd container runtime... Aug 6 07:53:05.324682 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 6 07:53:05.339366 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 6 07:53:05.346812 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 6 07:53:05.354498 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 6 07:53:05.363303 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 6 07:53:05.367129 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 6 07:53:05.371506 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 6 07:53:05.383580 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 6 07:53:05.395387 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 6 07:53:05.407407 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Aug 6 07:53:05.408614 jq[1435]: false Aug 6 07:53:05.424533 dbus-daemon[1434]: [system] SELinux support is enabled Aug 6 07:53:05.424021 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 6 07:53:05.429043 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 6 07:53:05.429913 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 6 07:53:05.438584 systemd[1]: Starting update-engine.service - Update Engine... Aug 6 07:53:05.450373 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 6 07:53:05.451980 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 6 07:53:05.461162 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 6 07:53:05.473734 jq[1446]: true Aug 6 07:53:05.474828 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 6 07:53:05.475071 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 6 07:53:05.479159 coreos-metadata[1433]: Aug 06 07:53:05.478 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:53:05.522265 coreos-metadata[1433]: Aug 06 07:53:05.500 INFO Fetch successful Aug 6 07:53:05.508434 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 6 07:53:05.508577 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 6 07:53:05.514324 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Aug 6 07:53:05.514447 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 6 07:53:05.514480 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 6 07:53:05.521032 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 6 07:53:05.521365 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 6 07:53:05.543878 systemd[1]: motdgen.service: Deactivated successfully. Aug 6 07:53:05.545246 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 6 07:53:05.570529 extend-filesystems[1436]: Found loop4 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found loop5 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found loop6 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found loop7 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found vda Aug 6 07:53:05.570529 extend-filesystems[1436]: Found vda1 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found vda2 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found vda3 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found usr Aug 6 07:53:05.570529 extend-filesystems[1436]: Found vda4 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found vda6 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found vda7 Aug 6 07:53:05.570529 extend-filesystems[1436]: Found vda9 Aug 6 07:53:05.570529 extend-filesystems[1436]: Checking size of /dev/vda9 Aug 6 07:53:05.672391 update_engine[1444]: I0806 07:53:05.583886 1444 main.cc:92] Flatcar Update Engine starting Aug 6 07:53:05.672391 update_engine[1444]: I0806 07:53:05.606366 1444 update_check_scheduler.cc:74] Next update check in 8m1s Aug 6 07:53:05.614601 systemd[1]: Started update-engine.service - Update Engine. 
Aug 6 07:53:05.679568 tar[1450]: linux-amd64/helm Aug 6 07:53:05.615331 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 6 07:53:05.691316 jq[1451]: true Aug 6 07:53:05.699249 extend-filesystems[1436]: Resized partition /dev/vda9 Aug 6 07:53:05.644733 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 6 07:53:05.710437 extend-filesystems[1478]: resize2fs 1.47.0 (5-Feb-2023) Aug 6 07:53:05.737585 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 6 07:53:05.759942 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 6 07:53:05.764770 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 6 07:53:05.768736 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1358) Aug 6 07:53:05.839447 systemd-logind[1443]: New seat seat0. Aug 6 07:53:05.869866 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Aug 6 07:53:05.869909 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 6 07:53:05.876252 systemd[1]: Started systemd-logind.service - User Login Management. Aug 6 07:53:05.927861 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 6 07:53:05.989273 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 6 07:53:05.989273 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 6 07:53:05.989273 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 6 07:53:06.008017 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Aug 6 07:53:06.008017 extend-filesystems[1436]: Found vdb Aug 6 07:53:06.002621 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Aug 6 07:53:06.003149 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 6 07:53:06.042523 systemd-networkd[1340]: eth0: Gained IPv6LL Aug 6 07:53:06.043193 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Aug 6 07:53:06.075325 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Aug 6 07:53:06.062705 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 6 07:53:06.071876 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 6 07:53:06.078977 systemd[1]: Reached target network-online.target - Network is Online. Aug 6 07:53:06.105781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:53:06.127440 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 6 07:53:06.144751 systemd[1]: Starting sshkeys.service... Aug 6 07:53:06.232329 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 6 07:53:06.246349 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 6 07:53:06.249618 systemd-networkd[1340]: eth1: Gained IPv6LL Aug 6 07:53:06.252829 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Aug 6 07:53:06.255770 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 6 07:53:06.272319 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 6 07:53:06.281426 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 6 07:53:06.399153 coreos-metadata[1520]: Aug 06 07:53:06.392 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:53:06.419819 coreos-metadata[1520]: Aug 06 07:53:06.419 INFO Fetch successful Aug 6 07:53:06.443626 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 6 07:53:06.462490 unknown[1520]: wrote ssh authorized keys file for user: core Aug 6 07:53:06.603250 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" Aug 6 07:53:06.605062 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 6 07:53:06.623294 systemd[1]: Finished sshkeys.service. Aug 6 07:53:06.644405 containerd[1467]: time="2024-08-06T07:53:06.618757822Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Aug 6 07:53:06.662897 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 6 07:53:06.681849 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 6 07:53:06.708406 systemd[1]: Started sshd@0-143.244.180.140:22-147.75.109.163:39056.service - OpenSSH per-connection server daemon (147.75.109.163:39056). Aug 6 07:53:06.777363 containerd[1467]: time="2024-08-06T07:53:06.777210378Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 6 07:53:06.777363 containerd[1467]: time="2024-08-06T07:53:06.777361390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:53:06.804191 containerd[1467]: time="2024-08-06T07:53:06.796013704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:53:06.807428 containerd[1467]: time="2024-08-06T07:53:06.804543247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:53:06.807428 containerd[1467]: time="2024-08-06T07:53:06.805549244Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:53:06.807428 containerd[1467]: time="2024-08-06T07:53:06.805585327Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 6 07:53:06.807428 containerd[1467]: time="2024-08-06T07:53:06.805801244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 6 07:53:06.807428 containerd[1467]: time="2024-08-06T07:53:06.805873675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:53:06.807428 containerd[1467]: time="2024-08-06T07:53:06.805898087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 6 07:53:06.809286 containerd[1467]: time="2024-08-06T07:53:06.809221253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:53:06.810028 containerd[1467]: time="2024-08-06T07:53:06.809988444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1
Aug 6 07:52:54.952395 — see kernel boot messages above.
Aug 6 07:53:06.812168 containerd[1467]: time="2024-08-06T07:53:06.811365269Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 6 07:53:06.812168 containerd[1467]: time="2024-08-06T07:53:06.811409389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 6 07:53:06.812168 containerd[1467]: time="2024-08-06T07:53:06.811760958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 6 07:53:06.812168 containerd[1467]: time="2024-08-06T07:53:06.811791365Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 6 07:53:06.812168 containerd[1467]: time="2024-08-06T07:53:06.811959307Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 6 07:53:06.812168 containerd[1467]: time="2024-08-06T07:53:06.811981856Z" level=info msg="metadata content store policy set" policy=shared
Aug 6 07:53:06.810261 systemd[1]: issuegen.service: Deactivated successfully.
Aug 6 07:53:06.811005 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 6 07:53:06.832166 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 6 07:53:06.844166 containerd[1467]: time="2024-08-06T07:53:06.843598886Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 6 07:53:06.844166 containerd[1467]: time="2024-08-06T07:53:06.843689632Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 6 07:53:06.844166 containerd[1467]: time="2024-08-06T07:53:06.843710483Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 6 07:53:06.844166 containerd[1467]: time="2024-08-06T07:53:06.843820383Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 6 07:53:06.844166 containerd[1467]: time="2024-08-06T07:53:06.844015528Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 6 07:53:06.844166 containerd[1467]: time="2024-08-06T07:53:06.844043408Z" level=info msg="NRI interface is disabled by configuration."
Aug 6 07:53:06.844166 containerd[1467]: time="2024-08-06T07:53:06.844077464Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844780111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844821491Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844840392Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844868756Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844890188Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844911599Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844930445Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844950047Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844968146Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.844990418Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.845004143Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 6 07:53:06.845774 containerd[1467]: time="2024-08-06T07:53:06.845021467Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.852096057Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.852830685Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.852887195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.852906461Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.852947135Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853027529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853044220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853062745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853316180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853335439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853354294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853371140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853398898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.857756 containerd[1467]: time="2024-08-06T07:53:06.853418815Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 6 07:53:06.863467 containerd[1467]: time="2024-08-06T07:53:06.860567323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.863467 containerd[1467]: time="2024-08-06T07:53:06.860673849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.863467 containerd[1467]: time="2024-08-06T07:53:06.860696145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.863467 containerd[1467]: time="2024-08-06T07:53:06.860716293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.863467 containerd[1467]: time="2024-08-06T07:53:06.860735833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.863467 containerd[1467]: time="2024-08-06T07:53:06.860757924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.863467 containerd[1467]: time="2024-08-06T07:53:06.860796512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.863467 containerd[1467]: time="2024-08-06T07:53:06.860813607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 6 07:53:06.864250 containerd[1467]: time="2024-08-06T07:53:06.861310227Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 6 07:53:06.864250 containerd[1467]: time="2024-08-06T07:53:06.861412097Z" level=info msg="Connect containerd service"
Aug 6 07:53:06.864250 containerd[1467]: time="2024-08-06T07:53:06.861472648Z" level=info msg="using legacy CRI server"
Aug 6 07:53:06.864250 containerd[1467]: time="2024-08-06T07:53:06.861485699Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 6 07:53:06.864250 containerd[1467]: time="2024-08-06T07:53:06.861692990Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 6 07:53:06.875744 containerd[1467]: time="2024-08-06T07:53:06.870227883Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 6 07:53:06.875744 containerd[1467]: time="2024-08-06T07:53:06.873306968Z" level=info msg="Start subscribing containerd event"
Aug 6 07:53:06.875744 containerd[1467]: time="2024-08-06T07:53:06.873457722Z" level=info msg="Start recovering state"
Aug 6 07:53:06.875744 containerd[1467]: time="2024-08-06T07:53:06.873610388Z" level=info msg="Start event monitor"
Aug 6 07:53:06.875744 containerd[1467]: time="2024-08-06T07:53:06.873632838Z" level=info msg="Start snapshots syncer"
Aug 6 07:53:06.875744 containerd[1467]: time="2024-08-06T07:53:06.873646630Z" level=info msg="Start cni network conf syncer for default"
Aug 6 07:53:06.875744 containerd[1467]: time="2024-08-06T07:53:06.873655968Z" level=info msg="Start streaming server"
Aug 6 07:53:06.878178 containerd[1467]: time="2024-08-06T07:53:06.873245718Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 6 07:53:06.878178 containerd[1467]: time="2024-08-06T07:53:06.877099120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 6 07:53:06.878178 containerd[1467]: time="2024-08-06T07:53:06.877163955Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 6 07:53:06.878178 containerd[1467]: time="2024-08-06T07:53:06.877257077Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 6 07:53:06.885815 containerd[1467]: time="2024-08-06T07:53:06.881648243Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 6 07:53:06.885815 containerd[1467]: time="2024-08-06T07:53:06.881774661Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 6 07:53:06.885815 containerd[1467]: time="2024-08-06T07:53:06.881869731Z" level=info msg="containerd successfully booted in 0.271966s"
Aug 6 07:53:06.882081 systemd[1]: Started containerd.service - containerd container runtime.
Aug 6 07:53:06.964889 sshd[1539]: Access denied for user core by PAM account configuration [preauth]
Aug 6 07:53:06.972449 systemd[1]: sshd@0-143.244.180.140:22-147.75.109.163:39056.service: Deactivated successfully.
Aug 6 07:53:06.977646 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 6 07:53:07.006944 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 6 07:53:07.020244 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 6 07:53:07.022924 systemd[1]: Reached target getty.target - Login Prompts.
Aug 6 07:53:07.386191 tar[1450]: linux-amd64/LICENSE
Aug 6 07:53:07.387488 tar[1450]: linux-amd64/README.md
Aug 6 07:53:07.403590 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 6 07:53:07.999462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 6 07:53:07.999950 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 6 07:53:08.002667 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 6 07:53:08.006029 systemd[1]: Startup finished in 1.266s (kernel) + 5.436s (initrd) + 7.889s (userspace) = 14.592s.
Aug 6 07:53:08.960083 kubelet[1561]: E0806 07:53:08.959958 1561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 6 07:53:08.963039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 6 07:53:08.963213 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 6 07:53:08.964053 systemd[1]: kubelet.service: Consumed 1.487s CPU time.
Aug 6 07:53:17.005679 systemd[1]: Started sshd@1-143.244.180.140:22-147.75.109.163:49050.service - OpenSSH per-connection server daemon (147.75.109.163:49050).
Aug 6 07:53:17.050167 sshd[1574]: Accepted publickey for core from 147.75.109.163 port 49050 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:53:17.053575 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:53:17.070406 systemd-logind[1443]: New session 1 of user core.
Aug 6 07:53:17.073398 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 6 07:53:17.088520 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 6 07:53:17.106917 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 6 07:53:17.113631 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 6 07:53:17.128044 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:53:17.244988 systemd[1578]: Queued start job for default target default.target.
Aug 6 07:53:17.251730 systemd[1578]: Created slice app.slice - User Application Slice.
Aug 6 07:53:17.251776 systemd[1578]: Reached target paths.target - Paths.
Aug 6 07:53:17.251792 systemd[1578]: Reached target timers.target - Timers.
Aug 6 07:53:17.253645 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 6 07:53:17.280401 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 6 07:53:17.280533 systemd[1578]: Reached target sockets.target - Sockets.
Aug 6 07:53:17.280549 systemd[1578]: Reached target basic.target - Basic System.
Aug 6 07:53:17.280602 systemd[1578]: Reached target default.target - Main User Target.
Aug 6 07:53:17.280638 systemd[1578]: Startup finished in 144ms.
Aug 6 07:53:17.281064 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 6 07:53:17.292419 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 6 07:53:17.365424 systemd[1]: Started sshd@2-143.244.180.140:22-147.75.109.163:49054.service - OpenSSH per-connection server daemon (147.75.109.163:49054).
Aug 6 07:53:17.405047 sshd[1589]: Accepted publickey for core from 147.75.109.163 port 49054 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:53:17.407545 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:53:17.414361 systemd-logind[1443]: New session 2 of user core.
Aug 6 07:53:17.421417 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 6 07:53:17.482557 sshd[1589]: pam_unix(sshd:session): session closed for user core
Aug 6 07:53:17.499706 systemd[1]: sshd@2-143.244.180.140:22-147.75.109.163:49054.service: Deactivated successfully.
Aug 6 07:53:17.501673 systemd[1]: session-2.scope: Deactivated successfully.
Aug 6 07:53:17.505237 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit.
Aug 6 07:53:17.507640 systemd[1]: Started sshd@3-143.244.180.140:22-147.75.109.163:49060.service - OpenSSH per-connection server daemon (147.75.109.163:49060).
Aug 6 07:53:17.509442 systemd-logind[1443]: Removed session 2.
Aug 6 07:53:17.559709 sshd[1596]: Accepted publickey for core from 147.75.109.163 port 49060 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:53:17.561353 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:53:17.566027 systemd-logind[1443]: New session 3 of user core.
Aug 6 07:53:17.573463 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 6 07:53:17.629693 sshd[1596]: pam_unix(sshd:session): session closed for user core
Aug 6 07:53:17.645946 systemd[1]: sshd@3-143.244.180.140:22-147.75.109.163:49060.service: Deactivated successfully.
Aug 6 07:53:17.648788 systemd[1]: session-3.scope: Deactivated successfully.
Aug 6 07:53:17.651549 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit.
Aug 6 07:53:17.661554 systemd[1]: Started sshd@4-143.244.180.140:22-147.75.109.163:49076.service - OpenSSH per-connection server daemon (147.75.109.163:49076).
Aug 6 07:53:17.663771 systemd-logind[1443]: Removed session 3.
Aug 6 07:53:17.702088 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 49076 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:53:17.703801 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:53:17.708893 systemd-logind[1443]: New session 4 of user core.
Aug 6 07:53:17.715546 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 6 07:53:17.778378 sshd[1603]: pam_unix(sshd:session): session closed for user core
Aug 6 07:53:17.789966 systemd[1]: sshd@4-143.244.180.140:22-147.75.109.163:49076.service: Deactivated successfully.
Aug 6 07:53:17.793065 systemd[1]: session-4.scope: Deactivated successfully.
Aug 6 07:53:17.795460 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit.
Aug 6 07:53:17.799591 systemd[1]: Started sshd@5-143.244.180.140:22-147.75.109.163:49080.service - OpenSSH per-connection server daemon (147.75.109.163:49080).
Aug 6 07:53:17.801249 systemd-logind[1443]: Removed session 4.
Aug 6 07:53:17.848835 sshd[1610]: Accepted publickey for core from 147.75.109.163 port 49080 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:53:17.850668 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:53:17.855843 systemd-logind[1443]: New session 5 of user core.
Aug 6 07:53:17.864444 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 6 07:53:17.935095 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 6 07:53:17.935501 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 6 07:53:17.950439 sudo[1613]: pam_unix(sudo:session): session closed for user root
Aug 6 07:53:17.954344 sshd[1610]: pam_unix(sshd:session): session closed for user core
Aug 6 07:53:17.966400 systemd[1]: sshd@5-143.244.180.140:22-147.75.109.163:49080.service: Deactivated successfully.
Aug 6 07:53:17.968750 systemd[1]: session-5.scope: Deactivated successfully.
Aug 6 07:53:17.971521 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit.
Aug 6 07:53:17.975692 systemd[1]: Started sshd@6-143.244.180.140:22-147.75.109.163:49092.service - OpenSSH per-connection server daemon (147.75.109.163:49092).
Aug 6 07:53:17.977643 systemd-logind[1443]: Removed session 5.
Aug 6 07:53:18.020676 sshd[1618]: Accepted publickey for core from 147.75.109.163 port 49092 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:53:18.022491 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:53:18.030288 systemd-logind[1443]: New session 6 of user core.
Aug 6 07:53:18.039575 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 6 07:53:18.101389 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 6 07:53:18.101683 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 6 07:53:18.107952 sudo[1622]: pam_unix(sudo:session): session closed for user root
Aug 6 07:53:18.116325 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 6 07:53:18.116694 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 6 07:53:18.138541 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 6 07:53:18.141271 auditctl[1625]: No rules
Aug 6 07:53:18.141774 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 6 07:53:18.142025 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 6 07:53:18.152725 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 6 07:53:18.184289 augenrules[1643]: No rules
Aug 6 07:53:18.185849 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 6 07:53:18.187560 sudo[1621]: pam_unix(sudo:session): session closed for user root
Aug 6 07:53:18.191208 sshd[1618]: pam_unix(sshd:session): session closed for user core
Aug 6 07:53:18.204993 systemd[1]: sshd@6-143.244.180.140:22-147.75.109.163:49092.service: Deactivated successfully.
Aug 6 07:53:18.206871 systemd[1]: session-6.scope: Deactivated successfully.
Aug 6 07:53:18.208201 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit.
Aug 6 07:53:18.214590 systemd[1]: Started sshd@7-143.244.180.140:22-147.75.109.163:49100.service - OpenSSH per-connection server daemon (147.75.109.163:49100).
Aug 6 07:53:18.217303 systemd-logind[1443]: Removed session 6.
Aug 6 07:53:18.257102 sshd[1651]: Accepted publickey for core from 147.75.109.163 port 49100 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:53:18.259558 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:53:18.266184 systemd-logind[1443]: New session 7 of user core.
Aug 6 07:53:18.275564 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 6 07:53:18.335000 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 6 07:53:18.335302 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 6 07:53:18.497583 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 6 07:53:18.497721 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 6 07:53:18.990785 dockerd[1663]: time="2024-08-06T07:53:18.990300752Z" level=info msg="Starting up"
Aug 6 07:53:18.997823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 6 07:53:19.007005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 6 07:53:19.058659 systemd[1]: var-lib-docker-metacopy\x2dcheck2666497139-merged.mount: Deactivated successfully.
Aug 6 07:53:19.087091 dockerd[1663]: time="2024-08-06T07:53:19.086280911Z" level=info msg="Loading containers: start."
Aug 6 07:53:19.218497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 6 07:53:19.222522 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 6 07:53:19.302282 kubelet[1705]: E0806 07:53:19.298738 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 6 07:53:19.302863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 6 07:53:19.303646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 6 07:53:19.305232 kernel: Initializing XFRM netlink socket
Aug 6 07:53:19.342031 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection.
Aug 6 07:53:19.403866 systemd-networkd[1340]: docker0: Link UP
Aug 6 07:53:19.428617 dockerd[1663]: time="2024-08-06T07:53:19.428576966Z" level=info msg="Loading containers: done."
Aug 6 07:53:20.065925 systemd-timesyncd[1357]: Contacted time server 71.162.136.44:123 (2.flatcar.pool.ntp.org).
Aug 6 07:53:20.066012 systemd-timesyncd[1357]: Initial clock synchronization to Tue 2024-08-06 07:53:20.065632 UTC.
Aug 6 07:53:20.066471 systemd-resolved[1320]: Clock change detected. Flushing caches.
Aug 6 07:53:20.117155 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2206349868-merged.mount: Deactivated successfully.
Aug 6 07:53:20.123874 dockerd[1663]: time="2024-08-06T07:53:20.123776899Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 6 07:53:20.124084 dockerd[1663]: time="2024-08-06T07:53:20.124067881Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 6 07:53:20.124221 dockerd[1663]: time="2024-08-06T07:53:20.124196171Z" level=info msg="Daemon has completed initialization"
Aug 6 07:53:20.170758 dockerd[1663]: time="2024-08-06T07:53:20.170281398Z" level=info msg="API listen on /run/docker.sock"
Aug 6 07:53:20.170548 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 6 07:53:21.093033 containerd[1467]: time="2024-08-06T07:53:21.092611656Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\""
Aug 6 07:53:21.864732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921197256.mount: Deactivated successfully.
Aug 6 07:53:23.421465 containerd[1467]: time="2024-08-06T07:53:23.420167322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:23.422756 containerd[1467]: time="2024-08-06T07:53:23.422706618Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527317"
Aug 6 07:53:23.424397 containerd[1467]: time="2024-08-06T07:53:23.424360158Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:23.427293 containerd[1467]: time="2024-08-06T07:53:23.427245591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:23.428842 containerd[1467]: time="2024-08-06T07:53:23.428799385Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 2.336099201s"
Aug 6 07:53:23.429016 containerd[1467]: time="2024-08-06T07:53:23.428999414Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\""
Aug 6 07:53:23.457852 containerd[1467]: time="2024-08-06T07:53:23.457810943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\""
Aug 6 07:53:25.397578 containerd[1467]: time="2024-08-06T07:53:25.397001697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:25.398676 containerd[1467]: time="2024-08-06T07:53:25.398587226Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847067"
Aug 6 07:53:25.401417 containerd[1467]: time="2024-08-06T07:53:25.401026514Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:25.403588 containerd[1467]: time="2024-08-06T07:53:25.403533468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:25.404967 containerd[1467]: time="2024-08-06T07:53:25.404926939Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 1.947075885s"
Aug 6 07:53:25.405556 containerd[1467]: time="2024-08-06T07:53:25.405258784Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\""
Aug 6 07:53:25.444168 containerd[1467]: time="2024-08-06T07:53:25.443961601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\""
Aug 6 07:53:26.696963 containerd[1467]: time="2024-08-06T07:53:26.696889426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:26.698594 containerd[1467]: time="2024-08-06T07:53:26.698500770Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097295"
Aug 6 07:53:26.700025 containerd[1467]: time="2024-08-06T07:53:26.699926933Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:26.705398 containerd[1467]: time="2024-08-06T07:53:26.704690497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:26.706275 containerd[1467]: time="2024-08-06T07:53:26.706226209Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 1.26217325s"
Aug 6 07:53:26.706275 containerd[1467]: time="2024-08-06T07:53:26.706272885Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\""
Aug 6 07:53:26.735279 containerd[1467]: time="2024-08-06T07:53:26.735207677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\""
Aug 6 07:53:28.063330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020455915.mount: Deactivated successfully.
Aug 6 07:53:28.560306 containerd[1467]: time="2024-08-06T07:53:28.560168721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:28.561468 containerd[1467]: time="2024-08-06T07:53:28.561397511Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303769"
Aug 6 07:53:28.562514 containerd[1467]: time="2024-08-06T07:53:28.562432676Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:28.564758 containerd[1467]: time="2024-08-06T07:53:28.564690501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:53:28.565999 containerd[1467]: time="2024-08-06T07:53:28.565811386Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 1.830355758s"
Aug 6 07:53:28.565999 containerd[1467]: time="2024-08-06T07:53:28.565861283Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\""
Aug 6 07:53:28.603650 containerd[1467]: time="2024-08-06T07:53:28.603614134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 6 07:53:29.222510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810654456.mount: Deactivated successfully.
Aug 6 07:53:29.234052 containerd[1467]: time="2024-08-06T07:53:29.233963208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:29.236112 containerd[1467]: time="2024-08-06T07:53:29.236013813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Aug 6 07:53:29.240006 containerd[1467]: time="2024-08-06T07:53:29.238393163Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:29.246273 containerd[1467]: time="2024-08-06T07:53:29.246207981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:29.248141 containerd[1467]: time="2024-08-06T07:53:29.248091306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 644.295211ms" Aug 6 07:53:29.248141 containerd[1467]: time="2024-08-06T07:53:29.248137303Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 6 07:53:29.280937 containerd[1467]: time="2024-08-06T07:53:29.280891650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 6 07:53:29.966483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 6 07:53:29.976345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 6 07:53:29.993788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86321358.mount: Deactivated successfully. Aug 6 07:53:30.156034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:53:30.170621 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 07:53:30.298526 kubelet[1929]: E0806 07:53:30.298292 1929 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 07:53:30.304073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 07:53:30.304317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 07:53:32.471024 containerd[1467]: time="2024-08-06T07:53:32.470942401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:32.472855 containerd[1467]: time="2024-08-06T07:53:32.472782797Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Aug 6 07:53:32.474350 containerd[1467]: time="2024-08-06T07:53:32.474234812Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:32.483148 containerd[1467]: time="2024-08-06T07:53:32.483047124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:32.486965 containerd[1467]: time="2024-08-06T07:53:32.486372058Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.2054261s" Aug 6 07:53:32.486965 containerd[1467]: time="2024-08-06T07:53:32.486443948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Aug 6 07:53:32.526089 containerd[1467]: time="2024-08-06T07:53:32.526027974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 6 07:53:33.224853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286643291.mount: Deactivated successfully. Aug 6 07:53:33.837512 containerd[1467]: time="2024-08-06T07:53:33.837402749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:33.839027 containerd[1467]: time="2024-08-06T07:53:33.838933927Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Aug 6 07:53:33.840170 containerd[1467]: time="2024-08-06T07:53:33.840103981Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:33.843026 containerd[1467]: time="2024-08-06T07:53:33.842694528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:53:33.843671 containerd[1467]: time="2024-08-06T07:53:33.843629670Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id 
\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.317548386s" Aug 6 07:53:33.843745 containerd[1467]: time="2024-08-06T07:53:33.843683463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Aug 6 07:53:36.506427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:53:36.520727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:53:36.551741 systemd[1]: Reloading requested from client PID 2052 ('systemctl') (unit session-7.scope)... Aug 6 07:53:36.551769 systemd[1]: Reloading... Aug 6 07:53:36.709010 zram_generator::config[2098]: No configuration found. Aug 6 07:53:36.844406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:53:36.942285 systemd[1]: Reloading finished in 389 ms. Aug 6 07:53:37.006234 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 6 07:53:37.006329 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 6 07:53:37.006737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:53:37.014531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:53:37.204147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 6 07:53:37.211551 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 6 07:53:37.279739 kubelet[2144]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:53:37.280240 kubelet[2144]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 6 07:53:37.280306 kubelet[2144]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:53:37.280522 kubelet[2144]: I0806 07:53:37.280462 2144 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 6 07:53:37.797434 kubelet[2144]: I0806 07:53:37.797348 2144 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 6 07:53:37.797434 kubelet[2144]: I0806 07:53:37.797391 2144 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 6 07:53:37.797764 kubelet[2144]: I0806 07:53:37.797744 2144 server.go:895] "Client rotation is on, will bootstrap in background" Aug 6 07:53:37.821803 kubelet[2144]: E0806 07:53:37.821441 2144 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.244.180.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.821803 kubelet[2144]: I0806 07:53:37.821499 2144 dynamic_cafile_content.go:157] 
"Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 07:53:37.840246 kubelet[2144]: I0806 07:53:37.840197 2144 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 6 07:53:37.842040 kubelet[2144]: I0806 07:53:37.841946 2144 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 6 07:53:37.842239 kubelet[2144]: I0806 07:53:37.842214 2144 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 6 07:53:37.842662 kubelet[2144]: I0806 07:53:37.842632 2144 topology_manager.go:138] 
"Creating topology manager with none policy" Aug 6 07:53:37.842662 kubelet[2144]: I0806 07:53:37.842657 2144 container_manager_linux.go:301] "Creating device plugin manager" Aug 6 07:53:37.843371 kubelet[2144]: I0806 07:53:37.843335 2144 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:53:37.845034 kubelet[2144]: I0806 07:53:37.844779 2144 kubelet.go:393] "Attempting to sync node with API server" Aug 6 07:53:37.845034 kubelet[2144]: I0806 07:53:37.844811 2144 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 6 07:53:37.845034 kubelet[2144]: I0806 07:53:37.844841 2144 kubelet.go:309] "Adding apiserver pod source" Aug 6 07:53:37.845034 kubelet[2144]: I0806 07:53:37.844860 2144 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 6 07:53:37.847516 kubelet[2144]: W0806 07:53:37.846731 2144 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://143.244.180.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-f-5a6fbdc7ed&limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.847516 kubelet[2144]: E0806 07:53:37.846788 2144 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.244.180.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-f-5a6fbdc7ed&limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.847516 kubelet[2144]: W0806 07:53:37.847165 2144 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://143.244.180.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.847516 kubelet[2144]: E0806 07:53:37.847209 2144 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list 
*v1.Service: Get "https://143.244.180.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.847866 kubelet[2144]: I0806 07:53:37.847852 2144 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 6 07:53:37.851899 kubelet[2144]: W0806 07:53:37.851863 2144 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 6 07:53:37.853398 kubelet[2144]: I0806 07:53:37.853370 2144 server.go:1232] "Started kubelet" Aug 6 07:53:37.855064 kubelet[2144]: I0806 07:53:37.854880 2144 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 6 07:53:37.855672 kubelet[2144]: I0806 07:53:37.855297 2144 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 6 07:53:37.855672 kubelet[2144]: I0806 07:53:37.855362 2144 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 6 07:53:37.857286 kubelet[2144]: I0806 07:53:37.856865 2144 server.go:462] "Adding debug handlers to kubelet server" Aug 6 07:53:37.859328 kubelet[2144]: E0806 07:53:37.859300 2144 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 6 07:53:37.859328 kubelet[2144]: E0806 07:53:37.859334 2144 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 6 07:53:37.860041 kubelet[2144]: I0806 07:53:37.860021 2144 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 6 07:53:37.867254 kubelet[2144]: I0806 07:53:37.866881 2144 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 6 07:53:37.867573 kubelet[2144]: I0806 07:53:37.867549 2144 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 6 07:53:37.867668 kubelet[2144]: I0806 07:53:37.867657 2144 reconciler_new.go:29] "Reconciler: start to sync state" Aug 6 07:53:37.868664 kubelet[2144]: W0806 07:53:37.868611 2144 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://143.244.180.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.868738 kubelet[2144]: E0806 07:53:37.868673 2144 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.244.180.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.868784 kubelet[2144]: E0806 07:53:37.868774 2144 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://143.244.180.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-f-5a6fbdc7ed?timeout=10s\": dial tcp 143.244.180.140:6443: connect: connection refused" interval="200ms" Aug 6 07:53:37.871547 kubelet[2144]: E0806 07:53:37.869346 2144 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.2.0-f-5a6fbdc7ed.17e914734ab8ab32", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 
0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3975.2.0-f-5a6fbdc7ed", UID:"ci-3975.2.0-f-5a6fbdc7ed", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.2.0-f-5a6fbdc7ed"}, FirstTimestamp:time.Date(2024, time.August, 6, 7, 53, 37, 853336370, time.Local), LastTimestamp:time.Date(2024, time.August, 6, 7, 53, 37, 853336370, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.2.0-f-5a6fbdc7ed"}': 'Post "https://143.244.180.140:6443/api/v1/namespaces/default/events": dial tcp 143.244.180.140:6443: connect: connection refused'(may retry after sleeping) Aug 6 07:53:37.905024 kubelet[2144]: I0806 07:53:37.904713 2144 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 6 07:53:37.907827 kubelet[2144]: I0806 07:53:37.907795 2144 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 6 07:53:37.908581 kubelet[2144]: I0806 07:53:37.908216 2144 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 6 07:53:37.908581 kubelet[2144]: I0806 07:53:37.908248 2144 kubelet.go:2303] "Starting kubelet main sync loop" Aug 6 07:53:37.908581 kubelet[2144]: E0806 07:53:37.908313 2144 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 6 07:53:37.911573 kubelet[2144]: W0806 07:53:37.911542 2144 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://143.244.180.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.911704 kubelet[2144]: E0806 07:53:37.911695 2144 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.244.180.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:37.927327 kubelet[2144]: I0806 07:53:37.927290 2144 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 6 07:53:37.927327 kubelet[2144]: I0806 07:53:37.927313 2144 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 6 07:53:37.927327 kubelet[2144]: I0806 07:53:37.927335 2144 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:53:37.933566 kubelet[2144]: I0806 07:53:37.933498 2144 policy_none.go:49] "None policy: Start" Aug 6 07:53:37.935041 kubelet[2144]: I0806 07:53:37.934903 2144 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 6 07:53:37.935041 kubelet[2144]: I0806 07:53:37.934939 2144 state_mem.go:35] "Initializing new in-memory state store" Aug 6 07:53:37.947374 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Aug 6 07:53:37.964335 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 6 07:53:37.969189 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 6 07:53:37.972006 kubelet[2144]: I0806 07:53:37.971743 2144 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:37.972817 kubelet[2144]: E0806 07:53:37.972799 2144 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.244.180.140:6443/api/v1/nodes\": dial tcp 143.244.180.140:6443: connect: connection refused" node="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:37.980769 kubelet[2144]: I0806 07:53:37.980434 2144 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 6 07:53:37.980966 kubelet[2144]: I0806 07:53:37.980788 2144 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 6 07:53:37.984105 kubelet[2144]: E0806 07:53:37.983887 2144 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.0-f-5a6fbdc7ed\" not found" Aug 6 07:53:38.010072 kubelet[2144]: I0806 07:53:38.008735 2144 topology_manager.go:215] "Topology Admit Handler" podUID="2959b2b97923b6e9b4ef2326705daca0" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.012460 kubelet[2144]: I0806 07:53:38.011484 2144 topology_manager.go:215] "Topology Admit Handler" podUID="01afd9e7db60d2c06158ee2b8c365b74" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.013218 kubelet[2144]: I0806 07:53:38.013183 2144 topology_manager.go:215] "Topology Admit Handler" podUID="89bfe769cb66f5ec5fa2e071983aecae" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.021692 systemd[1]: Created slice 
kubepods-burstable-pod2959b2b97923b6e9b4ef2326705daca0.slice - libcontainer container kubepods-burstable-pod2959b2b97923b6e9b4ef2326705daca0.slice. Aug 6 07:53:38.041052 systemd[1]: Created slice kubepods-burstable-pod89bfe769cb66f5ec5fa2e071983aecae.slice - libcontainer container kubepods-burstable-pod89bfe769cb66f5ec5fa2e071983aecae.slice. Aug 6 07:53:38.054943 systemd[1]: Created slice kubepods-burstable-pod01afd9e7db60d2c06158ee2b8c365b74.slice - libcontainer container kubepods-burstable-pod01afd9e7db60d2c06158ee2b8c365b74.slice. Aug 6 07:53:38.068537 kubelet[2144]: I0806 07:53:38.068459 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.068537 kubelet[2144]: I0806 07:53:38.068521 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89bfe769cb66f5ec5fa2e071983aecae-kubeconfig\") pod \"kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"89bfe769cb66f5ec5fa2e071983aecae\") " pod="kube-system/kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.068537 kubelet[2144]: I0806 07:53:38.068548 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2959b2b97923b6e9b4ef2326705daca0-ca-certs\") pod \"kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"2959b2b97923b6e9b4ef2326705daca0\") " pod="kube-system/kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.068819 kubelet[2144]: I0806 07:53:38.068574 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.068819 kubelet[2144]: I0806 07:53:38.068598 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-ca-certs\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.068819 kubelet[2144]: I0806 07:53:38.068626 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.068819 kubelet[2144]: I0806 07:53:38.068649 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.068819 kubelet[2144]: I0806 07:53:38.068677 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2959b2b97923b6e9b4ef2326705daca0-k8s-certs\") pod \"kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"2959b2b97923b6e9b4ef2326705daca0\") " pod="kube-system/kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.069228 kubelet[2144]: 
I0806 07:53:38.068706 2144 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2959b2b97923b6e9b4ef2326705daca0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"2959b2b97923b6e9b4ef2326705daca0\") " pod="kube-system/kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.069487 kubelet[2144]: E0806 07:53:38.069420 2144 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://143.244.180.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-f-5a6fbdc7ed?timeout=10s\": dial tcp 143.244.180.140:6443: connect: connection refused" interval="400ms" Aug 6 07:53:38.174473 kubelet[2144]: I0806 07:53:38.174390 2144 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.174896 kubelet[2144]: E0806 07:53:38.174849 2144 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.244.180.140:6443/api/v1/nodes\": dial tcp 143.244.180.140:6443: connect: connection refused" node="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.337536 kubelet[2144]: E0806 07:53:38.337336 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:38.338861 containerd[1467]: time="2024-08-06T07:53:38.338804188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed,Uid:2959b2b97923b6e9b4ef2326705daca0,Namespace:kube-system,Attempt:0,}" Aug 6 07:53:38.353436 kubelet[2144]: E0806 07:53:38.353387 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:38.354524 containerd[1467]: 
time="2024-08-06T07:53:38.354452277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed,Uid:89bfe769cb66f5ec5fa2e071983aecae,Namespace:kube-system,Attempt:0,}" Aug 6 07:53:38.359964 kubelet[2144]: E0806 07:53:38.359771 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:38.361310 containerd[1467]: time="2024-08-06T07:53:38.360769406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed,Uid:01afd9e7db60d2c06158ee2b8c365b74,Namespace:kube-system,Attempt:0,}" Aug 6 07:53:38.470743 kubelet[2144]: E0806 07:53:38.470665 2144 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://143.244.180.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-f-5a6fbdc7ed?timeout=10s\": dial tcp 143.244.180.140:6443: connect: connection refused" interval="800ms" Aug 6 07:53:38.576611 kubelet[2144]: I0806 07:53:38.576542 2144 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.577732 kubelet[2144]: E0806 07:53:38.577704 2144 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.244.180.140:6443/api/v1/nodes\": dial tcp 143.244.180.140:6443: connect: connection refused" node="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:53:38.727124 kubelet[2144]: W0806 07:53:38.726864 2144 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://143.244.180.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-f-5a6fbdc7ed&limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused Aug 6 07:53:38.727124 kubelet[2144]: E0806 07:53:38.726958 2144 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.Node: failed to list *v1.Node: Get "https://143.244.180.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-f-5a6fbdc7ed&limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused
Aug 6 07:53:38.996353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035549264.mount: Deactivated successfully.
Aug 6 07:53:39.006774 kubelet[2144]: W0806 07:53:39.006726 2144 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://143.244.180.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused
Aug 6 07:53:39.006774 kubelet[2144]: E0806 07:53:39.006785 2144 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.244.180.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused
Aug 6 07:53:39.011106 containerd[1467]: time="2024-08-06T07:53:39.010854771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 6 07:53:39.012083 containerd[1467]: time="2024-08-06T07:53:39.012027613Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 6 07:53:39.013474 containerd[1467]: time="2024-08-06T07:53:39.013406069Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 6 07:53:39.014571 containerd[1467]: time="2024-08-06T07:53:39.014484302Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 6 07:53:39.015848 containerd[1467]: time="2024-08-06T07:53:39.015772761Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 6 07:53:39.018261 containerd[1467]: time="2024-08-06T07:53:39.017789676Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 6 07:53:39.018261 containerd[1467]: time="2024-08-06T07:53:39.018168708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 6 07:53:39.020219 containerd[1467]: time="2024-08-06T07:53:39.020177251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 6 07:53:39.025161 containerd[1467]: time="2024-08-06T07:53:39.025091181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 686.094603ms"
Aug 6 07:53:39.028715 containerd[1467]: time="2024-08-06T07:53:39.028537963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.945649ms"
Aug 6 07:53:39.034002 containerd[1467]: time="2024-08-06T07:53:39.033575323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.518784ms"
Aug 6 07:53:39.066115 kubelet[2144]: W0806 07:53:39.065530 2144 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://143.244.180.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused
Aug 6 07:53:39.066115 kubelet[2144]: E0806 07:53:39.065613 2144 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.244.180.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused
Aug 6 07:53:39.139134 kubelet[2144]: W0806 07:53:39.138928 2144 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://143.244.180.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused
Aug 6 07:53:39.139134 kubelet[2144]: E0806 07:53:39.139021 2144 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.244.180.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.244.180.140:6443: connect: connection refused
Aug 6 07:53:39.217943 containerd[1467]: time="2024-08-06T07:53:39.217405389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 6 07:53:39.218177 containerd[1467]: time="2024-08-06T07:53:39.217505990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 6 07:53:39.218177 containerd[1467]: time="2024-08-06T07:53:39.218130237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 6 07:53:39.218263 containerd[1467]: time="2024-08-06T07:53:39.218158100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 6 07:53:39.223011 containerd[1467]: time="2024-08-06T07:53:39.222870632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 6 07:53:39.223190 containerd[1467]: time="2024-08-06T07:53:39.222950084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 6 07:53:39.223190 containerd[1467]: time="2024-08-06T07:53:39.222999968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 6 07:53:39.223190 containerd[1467]: time="2024-08-06T07:53:39.223016027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 6 07:53:39.223663 containerd[1467]: time="2024-08-06T07:53:39.223553625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 6 07:53:39.223849 containerd[1467]: time="2024-08-06T07:53:39.223803669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 6 07:53:39.224027 containerd[1467]: time="2024-08-06T07:53:39.223959456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 6 07:53:39.224179 containerd[1467]: time="2024-08-06T07:53:39.224144977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 6 07:53:39.260294 systemd[1]: Started cri-containerd-0c50f4041539039aa24da4dbcd3ac2edb188bc51e8e2942cc24e2166fcd882ed.scope - libcontainer container 0c50f4041539039aa24da4dbcd3ac2edb188bc51e8e2942cc24e2166fcd882ed.
Aug 6 07:53:39.272262 systemd[1]: Started cri-containerd-675d057d58d6d514a33b5c2f2ef800c2c1a318fcc718f2dd4578d0a0369b70f7.scope - libcontainer container 675d057d58d6d514a33b5c2f2ef800c2c1a318fcc718f2dd4578d0a0369b70f7.
Aug 6 07:53:39.276009 kubelet[2144]: E0806 07:53:39.275443 2144 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://143.244.180.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-f-5a6fbdc7ed?timeout=10s\": dial tcp 143.244.180.140:6443: connect: connection refused" interval="1.6s"
Aug 6 07:53:39.287575 systemd[1]: Started cri-containerd-3352cfd213ee04a080575baa688c569d6bfde4a5c7d5500909d2640ded45c0f2.scope - libcontainer container 3352cfd213ee04a080575baa688c569d6bfde4a5c7d5500909d2640ded45c0f2.
Aug 6 07:53:39.377996 containerd[1467]: time="2024-08-06T07:53:39.377782664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed,Uid:01afd9e7db60d2c06158ee2b8c365b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"3352cfd213ee04a080575baa688c569d6bfde4a5c7d5500909d2640ded45c0f2\""
Aug 6 07:53:39.380442 kubelet[2144]: I0806 07:53:39.380154 2144 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:39.381405 kubelet[2144]: E0806 07:53:39.381121 2144 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.244.180.140:6443/api/v1/nodes\": dial tcp 143.244.180.140:6443: connect: connection refused" node="ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:39.384711 kubelet[2144]: E0806 07:53:39.384531 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:39.387118 containerd[1467]: time="2024-08-06T07:53:39.386940898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed,Uid:89bfe769cb66f5ec5fa2e071983aecae,Namespace:kube-system,Attempt:0,} returns sandbox id \"675d057d58d6d514a33b5c2f2ef800c2c1a318fcc718f2dd4578d0a0369b70f7\""
Aug 6 07:53:39.388803 kubelet[2144]: E0806 07:53:39.388256 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:39.393276 containerd[1467]: time="2024-08-06T07:53:39.393223726Z" level=info msg="CreateContainer within sandbox \"675d057d58d6d514a33b5c2f2ef800c2c1a318fcc718f2dd4578d0a0369b70f7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 6 07:53:39.393741 containerd[1467]: time="2024-08-06T07:53:39.393550682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed,Uid:2959b2b97923b6e9b4ef2326705daca0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c50f4041539039aa24da4dbcd3ac2edb188bc51e8e2942cc24e2166fcd882ed\""
Aug 6 07:53:39.394066 containerd[1467]: time="2024-08-06T07:53:39.393224673Z" level=info msg="CreateContainer within sandbox \"3352cfd213ee04a080575baa688c569d6bfde4a5c7d5500909d2640ded45c0f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 6 07:53:39.394960 kubelet[2144]: E0806 07:53:39.394911 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:39.398180 containerd[1467]: time="2024-08-06T07:53:39.398142088Z" level=info msg="CreateContainer within sandbox \"0c50f4041539039aa24da4dbcd3ac2edb188bc51e8e2942cc24e2166fcd882ed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 6 07:53:39.443480 containerd[1467]: time="2024-08-06T07:53:39.443422986Z" level=info msg="CreateContainer within sandbox \"3352cfd213ee04a080575baa688c569d6bfde4a5c7d5500909d2640ded45c0f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dac1493b3f7a17e31087f41d8aba4f5b14fdda4550a8520c9ab7607a82acc99a\""
Aug 6 07:53:39.445453 containerd[1467]: time="2024-08-06T07:53:39.445166373Z" level=info msg="StartContainer for \"dac1493b3f7a17e31087f41d8aba4f5b14fdda4550a8520c9ab7607a82acc99a\""
Aug 6 07:53:39.446264 containerd[1467]: time="2024-08-06T07:53:39.446217203Z" level=info msg="CreateContainer within sandbox \"0c50f4041539039aa24da4dbcd3ac2edb188bc51e8e2942cc24e2166fcd882ed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ce8b4f849bece25a22088dd8469c58f038163020a6e92f01a32ccb0ca0240dd2\""
Aug 6 07:53:39.446986 containerd[1467]: time="2024-08-06T07:53:39.446929707Z" level=info msg="StartContainer for \"ce8b4f849bece25a22088dd8469c58f038163020a6e92f01a32ccb0ca0240dd2\""
Aug 6 07:53:39.450440 containerd[1467]: time="2024-08-06T07:53:39.449908626Z" level=info msg="CreateContainer within sandbox \"675d057d58d6d514a33b5c2f2ef800c2c1a318fcc718f2dd4578d0a0369b70f7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f29f266018cf633dbfcc563bbee803732d7f2e7a2ae07a6df36d0836dbd5c080\""
Aug 6 07:53:39.451570 containerd[1467]: time="2024-08-06T07:53:39.451528997Z" level=info msg="StartContainer for \"f29f266018cf633dbfcc563bbee803732d7f2e7a2ae07a6df36d0836dbd5c080\""
Aug 6 07:53:39.501234 systemd[1]: Started cri-containerd-dac1493b3f7a17e31087f41d8aba4f5b14fdda4550a8520c9ab7607a82acc99a.scope - libcontainer container dac1493b3f7a17e31087f41d8aba4f5b14fdda4550a8520c9ab7607a82acc99a.
Aug 6 07:53:39.506266 systemd[1]: Started cri-containerd-ce8b4f849bece25a22088dd8469c58f038163020a6e92f01a32ccb0ca0240dd2.scope - libcontainer container ce8b4f849bece25a22088dd8469c58f038163020a6e92f01a32ccb0ca0240dd2.
Aug 6 07:53:39.537596 systemd[1]: Started cri-containerd-f29f266018cf633dbfcc563bbee803732d7f2e7a2ae07a6df36d0836dbd5c080.scope - libcontainer container f29f266018cf633dbfcc563bbee803732d7f2e7a2ae07a6df36d0836dbd5c080.
Aug 6 07:53:39.636917 containerd[1467]: time="2024-08-06T07:53:39.636603845Z" level=info msg="StartContainer for \"ce8b4f849bece25a22088dd8469c58f038163020a6e92f01a32ccb0ca0240dd2\" returns successfully"
Aug 6 07:53:39.646867 containerd[1467]: time="2024-08-06T07:53:39.646461862Z" level=info msg="StartContainer for \"dac1493b3f7a17e31087f41d8aba4f5b14fdda4550a8520c9ab7607a82acc99a\" returns successfully"
Aug 6 07:53:39.661908 containerd[1467]: time="2024-08-06T07:53:39.661836140Z" level=info msg="StartContainer for \"f29f266018cf633dbfcc563bbee803732d7f2e7a2ae07a6df36d0836dbd5c080\" returns successfully"
Aug 6 07:53:39.934103 kubelet[2144]: E0806 07:53:39.932861 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:39.936387 kubelet[2144]: E0806 07:53:39.936357 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:39.940653 kubelet[2144]: E0806 07:53:39.940625 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:40.942943 kubelet[2144]: E0806 07:53:40.942862 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:40.983563 kubelet[2144]: I0806 07:53:40.983099 2144 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:41.808614 kubelet[2144]: E0806 07:53:41.808580 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:41.974821 kubelet[2144]: E0806 07:53:41.974767 2144 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.2.0-f-5a6fbdc7ed\" not found" node="ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:42.056797 kubelet[2144]: I0806 07:53:42.056742 2144 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:42.607620 kubelet[2144]: E0806 07:53:42.607574 2144 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:42.608540 kubelet[2144]: E0806 07:53:42.608438 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:42.849204 kubelet[2144]: I0806 07:53:42.849136 2144 apiserver.go:52] "Watching apiserver"
Aug 6 07:53:42.868213 kubelet[2144]: I0806 07:53:42.868045 2144 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 6 07:53:44.689480 kubelet[2144]: W0806 07:53:44.689424 2144 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 6 07:53:44.691238 kubelet[2144]: E0806 07:53:44.691170 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:44.951519 kubelet[2144]: E0806 07:53:44.951301 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:45.402820 systemd[1]: Reloading requested from client PID 2416 ('systemctl') (unit session-7.scope)...
Aug 6 07:53:45.402850 systemd[1]: Reloading...
Aug 6 07:53:45.525301 zram_generator::config[2450]: No configuration found.
Aug 6 07:53:45.707050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 6 07:53:45.829839 systemd[1]: Reloading finished in 426 ms.
Aug 6 07:53:45.888437 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 6 07:53:45.889520 kubelet[2144]: I0806 07:53:45.888705 2144 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 6 07:53:45.893503 systemd[1]: kubelet.service: Deactivated successfully.
Aug 6 07:53:45.893803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 6 07:53:45.893912 systemd[1]: kubelet.service: Consumed 1.170s CPU time, 111.2M memory peak, 0B memory swap peak.
Aug 6 07:53:45.903409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 6 07:53:46.085989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 6 07:53:46.099115 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 6 07:53:46.181989 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 6 07:53:46.181989 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 6 07:53:46.181989 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 6 07:53:46.184000 kubelet[2504]: I0806 07:53:46.182625 2504 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 6 07:53:46.189666 kubelet[2504]: I0806 07:53:46.189592 2504 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 6 07:53:46.189849 kubelet[2504]: I0806 07:53:46.189834 2504 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 6 07:53:46.190574 kubelet[2504]: I0806 07:53:46.190526 2504 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 6 07:53:46.196063 kubelet[2504]: I0806 07:53:46.195059 2504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 6 07:53:46.199007 kubelet[2504]: I0806 07:53:46.198487 2504 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 6 07:53:46.215089 kubelet[2504]: I0806 07:53:46.215051 2504 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 6 07:53:46.215531 kubelet[2504]: I0806 07:53:46.215512 2504 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 6 07:53:46.216443 kubelet[2504]: I0806 07:53:46.215913 2504 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 6 07:53:46.216624 kubelet[2504]: I0806 07:53:46.216612 2504 topology_manager.go:138] "Creating topology manager with none policy"
Aug 6 07:53:46.216679 kubelet[2504]: I0806 07:53:46.216673 2504 container_manager_linux.go:301] "Creating device plugin manager"
Aug 6 07:53:46.216792 kubelet[2504]: I0806 07:53:46.216781 2504 state_mem.go:36] "Initialized new in-memory state store"
Aug 6 07:53:46.217052 kubelet[2504]: I0806 07:53:46.217038 2504 kubelet.go:393] "Attempting to sync node with API server"
Aug 6 07:53:46.217130 kubelet[2504]: I0806 07:53:46.217122 2504 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 6 07:53:46.217210 kubelet[2504]: I0806 07:53:46.217200 2504 kubelet.go:309] "Adding apiserver pod source"
Aug 6 07:53:46.217267 kubelet[2504]: I0806 07:53:46.217260 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 6 07:53:46.219349 kubelet[2504]: I0806 07:53:46.219321 2504 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 6 07:53:46.220338 kubelet[2504]: I0806 07:53:46.220156 2504 server.go:1232] "Started kubelet"
Aug 6 07:53:46.223086 kubelet[2504]: I0806 07:53:46.223058 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 6 07:53:46.245018 kubelet[2504]: I0806 07:53:46.244313 2504 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 6 07:53:46.245381 kubelet[2504]: E0806 07:53:46.245349 2504 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 6 07:53:46.246168 kubelet[2504]: E0806 07:53:46.246111 2504 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 6 07:53:46.249775 kubelet[2504]: I0806 07:53:46.249736 2504 server.go:462] "Adding debug handlers to kubelet server"
Aug 6 07:53:46.256120 kubelet[2504]: I0806 07:53:46.255171 2504 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 6 07:53:46.256120 kubelet[2504]: I0806 07:53:46.255590 2504 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 6 07:53:46.258582 kubelet[2504]: I0806 07:53:46.258537 2504 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 6 07:53:46.259070 kubelet[2504]: I0806 07:53:46.259041 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 6 07:53:46.260769 kubelet[2504]: I0806 07:53:46.260469 2504 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 6 07:53:46.264118 kubelet[2504]: I0806 07:53:46.264044 2504 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 6 07:53:46.269402 kubelet[2504]: I0806 07:53:46.269259 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 6 07:53:46.270809 kubelet[2504]: I0806 07:53:46.270169 2504 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 6 07:53:46.270809 kubelet[2504]: I0806 07:53:46.270223 2504 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 6 07:53:46.270809 kubelet[2504]: E0806 07:53:46.270314 2504 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 6 07:53:46.360950 kubelet[2504]: I0806 07:53:46.360821 2504 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.370850 kubelet[2504]: E0806 07:53:46.370746 2504 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 6 07:53:46.380006 kubelet[2504]: I0806 07:53:46.378672 2504 kubelet_node_status.go:108] "Node was previously registered" node="ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.380006 kubelet[2504]: I0806 07:53:46.378769 2504 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.407550 kubelet[2504]: I0806 07:53:46.407517 2504 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 6 07:53:46.409150 kubelet[2504]: I0806 07:53:46.407772 2504 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 6 07:53:46.409150 kubelet[2504]: I0806 07:53:46.407812 2504 state_mem.go:36] "Initialized new in-memory state store"
Aug 6 07:53:46.409623 kubelet[2504]: I0806 07:53:46.409361 2504 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 6 07:53:46.409623 kubelet[2504]: I0806 07:53:46.409428 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 6 07:53:46.409623 kubelet[2504]: I0806 07:53:46.409438 2504 policy_none.go:49] "None policy: Start"
Aug 6 07:53:46.411176 kubelet[2504]: I0806 07:53:46.410859 2504 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 6 07:53:46.411176 kubelet[2504]: I0806 07:53:46.410909 2504 state_mem.go:35] "Initializing new in-memory state store"
Aug 6 07:53:46.411511 kubelet[2504]: I0806 07:53:46.411482 2504 state_mem.go:75] "Updated machine memory state"
Aug 6 07:53:46.422081 kubelet[2504]: I0806 07:53:46.422037 2504 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 6 07:53:46.423349 kubelet[2504]: I0806 07:53:46.422917 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 6 07:53:46.572301 kubelet[2504]: I0806 07:53:46.572217 2504 topology_manager.go:215] "Topology Admit Handler" podUID="2959b2b97923b6e9b4ef2326705daca0" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.572608 kubelet[2504]: I0806 07:53:46.572388 2504 topology_manager.go:215] "Topology Admit Handler" podUID="01afd9e7db60d2c06158ee2b8c365b74" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.572608 kubelet[2504]: I0806 07:53:46.572451 2504 topology_manager.go:215] "Topology Admit Handler" podUID="89bfe769cb66f5ec5fa2e071983aecae" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.582994 kubelet[2504]: W0806 07:53:46.582381 2504 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 6 07:53:46.584542 kubelet[2504]: W0806 07:53:46.584286 2504 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 6 07:53:46.589545 kubelet[2504]: W0806 07:53:46.589331 2504 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 6 07:53:46.589545 kubelet[2504]: E0806 07:53:46.589427 2504 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed\" already exists" pod="kube-system/kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.666252 kubelet[2504]: I0806 07:53:46.665957 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-ca-certs\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.666252 kubelet[2504]: I0806 07:53:46.666026 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.666252 kubelet[2504]: I0806 07:53:46.666052 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.666252 kubelet[2504]: I0806 07:53:46.666084 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2959b2b97923b6e9b4ef2326705daca0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"2959b2b97923b6e9b4ef2326705daca0\") " pod="kube-system/kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.666252 kubelet[2504]: I0806 07:53:46.666112 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.667336 kubelet[2504]: I0806 07:53:46.666144 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01afd9e7db60d2c06158ee2b8c365b74-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"01afd9e7db60d2c06158ee2b8c365b74\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.667336 kubelet[2504]: I0806 07:53:46.666174 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89bfe769cb66f5ec5fa2e071983aecae-kubeconfig\") pod \"kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"89bfe769cb66f5ec5fa2e071983aecae\") " pod="kube-system/kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.667336 kubelet[2504]: I0806 07:53:46.666202 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2959b2b97923b6e9b4ef2326705daca0-ca-certs\") pod \"kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"2959b2b97923b6e9b4ef2326705daca0\") " pod="kube-system/kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.667336 kubelet[2504]: I0806 07:53:46.666229 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2959b2b97923b6e9b4ef2326705daca0-k8s-certs\") pod \"kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed\" (UID: \"2959b2b97923b6e9b4ef2326705daca0\") " pod="kube-system/kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed"
Aug 6 07:53:46.884461 kubelet[2504]: E0806 07:53:46.884408 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:46.888235 kubelet[2504]: E0806 07:53:46.888184 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:46.895016 kubelet[2504]: E0806 07:53:46.893005 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:47.231004 kubelet[2504]: I0806 07:53:47.229598 2504 apiserver.go:52] "Watching apiserver"
Aug 6 07:53:47.266284 kubelet[2504]: I0806 07:53:47.266180 2504 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 6 07:53:47.322752 kubelet[2504]: E0806 07:53:47.322705 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:47.325091 kubelet[2504]: E0806 07:53:47.323010 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:47.325091 kubelet[2504]: E0806 07:53:47.323745 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:47.403016 kubelet[2504]: I0806 07:53:47.401575 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.0-f-5a6fbdc7ed" podStartSLOduration=1.401527114 podCreationTimestamp="2024-08-06 07:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:53:47.35470324 +0000 UTC m=+1.247411399" watchObservedRunningTime="2024-08-06 07:53:47.401527114 +0000 UTC m=+1.294235272"
Aug 6 07:53:47.445103 kubelet[2504]: I0806 07:53:47.445059 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.0-f-5a6fbdc7ed" podStartSLOduration=3.445006138 podCreationTimestamp="2024-08-06 07:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:53:47.443498388 +0000 UTC m=+1.336206545" watchObservedRunningTime="2024-08-06 07:53:47.445006138 +0000 UTC m=+1.337714288"
Aug 6 07:53:47.445284 kubelet[2504]: I0806 07:53:47.445234 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.0-f-5a6fbdc7ed" podStartSLOduration=1.445212766 podCreationTimestamp="2024-08-06 07:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:53:47.414194012 +0000 UTC m=+1.306902169" watchObservedRunningTime="2024-08-06 07:53:47.445212766 +0000 UTC m=+1.337920922"
Aug 6 07:53:48.324650 kubelet[2504]: E0806 07:53:48.324275 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:53:50.155505 kubelet[2504]: E0806 07:53:50.154470 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6
07:53:50.329775 kubelet[2504]: E0806 07:53:50.329362 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:51.028932 update_engine[1444]: I0806 07:53:51.027801 1444 update_attempter.cc:509] Updating boot flags... Aug 6 07:53:51.064109 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2569) Aug 6 07:53:51.132063 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2572) Aug 6 07:53:53.044565 sudo[1654]: pam_unix(sudo:session): session closed for user root Aug 6 07:53:53.054045 sshd[1651]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:53.062909 systemd[1]: sshd@7-143.244.180.140:22-147.75.109.163:49100.service: Deactivated successfully. Aug 6 07:53:53.065924 systemd[1]: session-7.scope: Deactivated successfully. Aug 6 07:53:53.067213 systemd[1]: session-7.scope: Consumed 5.379s CPU time, 136.8M memory peak, 0B memory swap peak. Aug 6 07:53:53.071471 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Aug 6 07:53:53.073563 systemd-logind[1443]: Removed session 7. 
Aug 6 07:53:53.714723 kubelet[2504]: E0806 07:53:53.714655 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:54.341767 kubelet[2504]: E0806 07:53:54.341136 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:55.838844 kubelet[2504]: E0806 07:53:55.837795 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:56.342548 kubelet[2504]: E0806 07:53:56.342507 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:57.079482 kubelet[2504]: I0806 07:53:57.079409 2504 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 6 07:53:57.080459 containerd[1467]: time="2024-08-06T07:53:57.080382676Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 6 07:53:57.081746 kubelet[2504]: I0806 07:53:57.080703 2504 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 6 07:53:57.576295 kubelet[2504]: I0806 07:53:57.576032 2504 topology_manager.go:215] "Topology Admit Handler" podUID="7ad64284-cbd1-4af8-a8a8-c2d651b1f853" podNamespace="kube-system" podName="kube-proxy-88lvg" Aug 6 07:53:57.597960 systemd[1]: Created slice kubepods-besteffort-pod7ad64284_cbd1_4af8_a8a8_c2d651b1f853.slice - libcontainer container kubepods-besteffort-pod7ad64284_cbd1_4af8_a8a8_c2d651b1f853.slice. 
Aug 6 07:53:57.736612 kubelet[2504]: I0806 07:53:57.736462 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24qp2\" (UniqueName: \"kubernetes.io/projected/7ad64284-cbd1-4af8-a8a8-c2d651b1f853-kube-api-access-24qp2\") pod \"kube-proxy-88lvg\" (UID: \"7ad64284-cbd1-4af8-a8a8-c2d651b1f853\") " pod="kube-system/kube-proxy-88lvg" Aug 6 07:53:57.736612 kubelet[2504]: I0806 07:53:57.736533 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ad64284-cbd1-4af8-a8a8-c2d651b1f853-kube-proxy\") pod \"kube-proxy-88lvg\" (UID: \"7ad64284-cbd1-4af8-a8a8-c2d651b1f853\") " pod="kube-system/kube-proxy-88lvg" Aug 6 07:53:57.736612 kubelet[2504]: I0806 07:53:57.736565 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ad64284-cbd1-4af8-a8a8-c2d651b1f853-xtables-lock\") pod \"kube-proxy-88lvg\" (UID: \"7ad64284-cbd1-4af8-a8a8-c2d651b1f853\") " pod="kube-system/kube-proxy-88lvg" Aug 6 07:53:57.736612 kubelet[2504]: I0806 07:53:57.736592 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ad64284-cbd1-4af8-a8a8-c2d651b1f853-lib-modules\") pod \"kube-proxy-88lvg\" (UID: \"7ad64284-cbd1-4af8-a8a8-c2d651b1f853\") " pod="kube-system/kube-proxy-88lvg" Aug 6 07:53:57.857576 kubelet[2504]: E0806 07:53:57.856734 2504 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 6 07:53:57.857576 kubelet[2504]: E0806 07:53:57.856812 2504 projected.go:198] Error preparing data for projected volume kube-api-access-24qp2 for pod kube-system/kube-proxy-88lvg: configmap "kube-root-ca.crt" not found Aug 6 07:53:57.859725 kubelet[2504]: E0806 07:53:57.859644 2504 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ad64284-cbd1-4af8-a8a8-c2d651b1f853-kube-api-access-24qp2 podName:7ad64284-cbd1-4af8-a8a8-c2d651b1f853 nodeName:}" failed. No retries permitted until 2024-08-06 07:53:58.357312147 +0000 UTC m=+12.250020298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-24qp2" (UniqueName: "kubernetes.io/projected/7ad64284-cbd1-4af8-a8a8-c2d651b1f853-kube-api-access-24qp2") pod "kube-proxy-88lvg" (UID: "7ad64284-cbd1-4af8-a8a8-c2d651b1f853") : configmap "kube-root-ca.crt" not found Aug 6 07:53:58.147542 kubelet[2504]: I0806 07:53:58.147418 2504 topology_manager.go:215] "Topology Admit Handler" podUID="1953d980-0f41-448a-b3c1-9d5ae3377733" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-rpqpk" Aug 6 07:53:58.156493 systemd[1]: Created slice kubepods-besteffort-pod1953d980_0f41_448a_b3c1_9d5ae3377733.slice - libcontainer container kubepods-besteffort-pod1953d980_0f41_448a_b3c1_9d5ae3377733.slice. 
Aug 6 07:53:58.340895 kubelet[2504]: I0806 07:53:58.340793 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rmjj\" (UniqueName: \"kubernetes.io/projected/1953d980-0f41-448a-b3c1-9d5ae3377733-kube-api-access-2rmjj\") pod \"tigera-operator-76c4974c85-rpqpk\" (UID: \"1953d980-0f41-448a-b3c1-9d5ae3377733\") " pod="tigera-operator/tigera-operator-76c4974c85-rpqpk" Aug 6 07:53:58.340895 kubelet[2504]: I0806 07:53:58.340888 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1953d980-0f41-448a-b3c1-9d5ae3377733-var-lib-calico\") pod \"tigera-operator-76c4974c85-rpqpk\" (UID: \"1953d980-0f41-448a-b3c1-9d5ae3377733\") " pod="tigera-operator/tigera-operator-76c4974c85-rpqpk" Aug 6 07:53:58.463199 containerd[1467]: time="2024-08-06T07:53:58.463101121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rpqpk,Uid:1953d980-0f41-448a-b3c1-9d5ae3377733,Namespace:tigera-operator,Attempt:0,}" Aug 6 07:53:58.506098 containerd[1467]: time="2024-08-06T07:53:58.505188267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:53:58.506098 containerd[1467]: time="2024-08-06T07:53:58.505996091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:53:58.506098 containerd[1467]: time="2024-08-06T07:53:58.506024330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:53:58.506098 containerd[1467]: time="2024-08-06T07:53:58.506039225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:53:58.510411 kubelet[2504]: E0806 07:53:58.509412 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:58.512326 containerd[1467]: time="2024-08-06T07:53:58.512278016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-88lvg,Uid:7ad64284-cbd1-4af8-a8a8-c2d651b1f853,Namespace:kube-system,Attempt:0,}" Aug 6 07:53:58.551272 systemd[1]: Started cri-containerd-a1b6286f83a6c5db07354da6bdace88db0870e37ddc6b363c9fd303ab2f5fdc8.scope - libcontainer container a1b6286f83a6c5db07354da6bdace88db0870e37ddc6b363c9fd303ab2f5fdc8. Aug 6 07:53:58.582312 containerd[1467]: time="2024-08-06T07:53:58.582168510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:53:58.583063 containerd[1467]: time="2024-08-06T07:53:58.582250376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:53:58.583786 containerd[1467]: time="2024-08-06T07:53:58.583697820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:53:58.583786 containerd[1467]: time="2024-08-06T07:53:58.583723626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:53:58.609343 systemd[1]: Started cri-containerd-48e3c4b5929857d6c50f0ec1a5190a8d37955f872289424df1ad33dc2ddce73a.scope - libcontainer container 48e3c4b5929857d6c50f0ec1a5190a8d37955f872289424df1ad33dc2ddce73a. 
Aug 6 07:53:58.638823 containerd[1467]: time="2024-08-06T07:53:58.638521520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rpqpk,Uid:1953d980-0f41-448a-b3c1-9d5ae3377733,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a1b6286f83a6c5db07354da6bdace88db0870e37ddc6b363c9fd303ab2f5fdc8\"" Aug 6 07:53:58.642838 containerd[1467]: time="2024-08-06T07:53:58.642723470Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 6 07:53:58.663677 containerd[1467]: time="2024-08-06T07:53:58.663618915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-88lvg,Uid:7ad64284-cbd1-4af8-a8a8-c2d651b1f853,Namespace:kube-system,Attempt:0,} returns sandbox id \"48e3c4b5929857d6c50f0ec1a5190a8d37955f872289424df1ad33dc2ddce73a\"" Aug 6 07:53:58.665954 kubelet[2504]: E0806 07:53:58.665702 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:53:58.672169 containerd[1467]: time="2024-08-06T07:53:58.672074455Z" level=info msg="CreateContainer within sandbox \"48e3c4b5929857d6c50f0ec1a5190a8d37955f872289424df1ad33dc2ddce73a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 6 07:53:58.699096 containerd[1467]: time="2024-08-06T07:53:58.698957118Z" level=info msg="CreateContainer within sandbox \"48e3c4b5929857d6c50f0ec1a5190a8d37955f872289424df1ad33dc2ddce73a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea0cb35f1541554101428893fe0b5538eeb28f62bf204eda446ab05d9f933287\"" Aug 6 07:53:58.704014 containerd[1467]: time="2024-08-06T07:53:58.701385020Z" level=info msg="StartContainer for \"ea0cb35f1541554101428893fe0b5538eeb28f62bf204eda446ab05d9f933287\"" Aug 6 07:53:58.756614 systemd[1]: Started cri-containerd-ea0cb35f1541554101428893fe0b5538eeb28f62bf204eda446ab05d9f933287.scope - libcontainer container 
ea0cb35f1541554101428893fe0b5538eeb28f62bf204eda446ab05d9f933287. Aug 6 07:53:58.805770 containerd[1467]: time="2024-08-06T07:53:58.805711503Z" level=info msg="StartContainer for \"ea0cb35f1541554101428893fe0b5538eeb28f62bf204eda446ab05d9f933287\" returns successfully" Aug 6 07:53:59.343445 systemd[1]: Started sshd@8-143.244.180.140:22-103.240.6.43:35794.service - OpenSSH per-connection server daemon (103.240.6.43:35794). Aug 6 07:53:59.361573 kubelet[2504]: E0806 07:53:59.359796 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:00.211415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2298469380.mount: Deactivated successfully. Aug 6 07:54:00.393997 sshd[2815]: Invalid user rabbitmq from 103.240.6.43 port 35794 Aug 6 07:54:00.590179 sshd[2815]: Received disconnect from 103.240.6.43 port 35794:11: Bye Bye [preauth] Aug 6 07:54:00.590179 sshd[2815]: Disconnected from invalid user rabbitmq 103.240.6.43 port 35794 [preauth] Aug 6 07:54:00.593501 systemd[1]: sshd@8-143.244.180.140:22-103.240.6.43:35794.service: Deactivated successfully. 
Aug 6 07:54:01.429464 containerd[1467]: time="2024-08-06T07:54:01.429018747Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:01.432319 containerd[1467]: time="2024-08-06T07:54:01.432178379Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076148" Aug 6 07:54:01.436900 containerd[1467]: time="2024-08-06T07:54:01.436434017Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:01.447718 containerd[1467]: time="2024-08-06T07:54:01.447620793Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:01.449796 containerd[1467]: time="2024-08-06T07:54:01.449621235Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.806855888s" Aug 6 07:54:01.449796 containerd[1467]: time="2024-08-06T07:54:01.449674783Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Aug 6 07:54:01.455853 containerd[1467]: time="2024-08-06T07:54:01.455298551Z" level=info msg="CreateContainer within sandbox \"a1b6286f83a6c5db07354da6bdace88db0870e37ddc6b363c9fd303ab2f5fdc8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 6 07:54:01.530814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276192550.mount: Deactivated successfully. 
Aug 6 07:54:01.580188 containerd[1467]: time="2024-08-06T07:54:01.579943487Z" level=info msg="CreateContainer within sandbox \"a1b6286f83a6c5db07354da6bdace88db0870e37ddc6b363c9fd303ab2f5fdc8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"559fb06d74d717d2cec0f48d6555075dec1dbdcced446d776526a549b08da532\"" Aug 6 07:54:01.588447 containerd[1467]: time="2024-08-06T07:54:01.587265880Z" level=info msg="StartContainer for \"559fb06d74d717d2cec0f48d6555075dec1dbdcced446d776526a549b08da532\"" Aug 6 07:54:01.745469 systemd[1]: Started cri-containerd-559fb06d74d717d2cec0f48d6555075dec1dbdcced446d776526a549b08da532.scope - libcontainer container 559fb06d74d717d2cec0f48d6555075dec1dbdcced446d776526a549b08da532. Aug 6 07:54:01.883415 containerd[1467]: time="2024-08-06T07:54:01.882674849Z" level=info msg="StartContainer for \"559fb06d74d717d2cec0f48d6555075dec1dbdcced446d776526a549b08da532\" returns successfully" Aug 6 07:54:02.406873 kubelet[2504]: I0806 07:54:02.406263 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-88lvg" podStartSLOduration=5.401703812 podCreationTimestamp="2024-08-06 07:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:53:59.38919148 +0000 UTC m=+13.281899654" watchObservedRunningTime="2024-08-06 07:54:02.401703812 +0000 UTC m=+16.294411971" Aug 6 07:54:02.406873 kubelet[2504]: I0806 07:54:02.406512 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-rpqpk" podStartSLOduration=1.597898791 podCreationTimestamp="2024-08-06 07:53:58 +0000 UTC" firstStartedPulling="2024-08-06 07:53:58.641883686 +0000 UTC m=+12.534591827" lastFinishedPulling="2024-08-06 07:54:01.450436695 +0000 UTC m=+15.343144846" observedRunningTime="2024-08-06 07:54:02.401034033 +0000 UTC m=+16.293742197" 
watchObservedRunningTime="2024-08-06 07:54:02.40645181 +0000 UTC m=+16.299159970" Aug 6 07:54:05.130953 kubelet[2504]: I0806 07:54:05.130291 2504 topology_manager.go:215] "Topology Admit Handler" podUID="a3a8c515-b9f7-4410-b651-da77ee776c5e" podNamespace="calico-system" podName="calico-typha-86498d5f9-jndjm" Aug 6 07:54:05.164866 systemd[1]: Created slice kubepods-besteffort-poda3a8c515_b9f7_4410_b651_da77ee776c5e.slice - libcontainer container kubepods-besteffort-poda3a8c515_b9f7_4410_b651_da77ee776c5e.slice. Aug 6 07:54:05.234381 kubelet[2504]: I0806 07:54:05.233897 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8z7r\" (UniqueName: \"kubernetes.io/projected/a3a8c515-b9f7-4410-b651-da77ee776c5e-kube-api-access-v8z7r\") pod \"calico-typha-86498d5f9-jndjm\" (UID: \"a3a8c515-b9f7-4410-b651-da77ee776c5e\") " pod="calico-system/calico-typha-86498d5f9-jndjm" Aug 6 07:54:05.234381 kubelet[2504]: I0806 07:54:05.234104 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a3a8c515-b9f7-4410-b651-da77ee776c5e-typha-certs\") pod \"calico-typha-86498d5f9-jndjm\" (UID: \"a3a8c515-b9f7-4410-b651-da77ee776c5e\") " pod="calico-system/calico-typha-86498d5f9-jndjm" Aug 6 07:54:05.234381 kubelet[2504]: I0806 07:54:05.234172 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3a8c515-b9f7-4410-b651-da77ee776c5e-tigera-ca-bundle\") pod \"calico-typha-86498d5f9-jndjm\" (UID: \"a3a8c515-b9f7-4410-b651-da77ee776c5e\") " pod="calico-system/calico-typha-86498d5f9-jndjm" Aug 6 07:54:05.252680 kubelet[2504]: I0806 07:54:05.252358 2504 topology_manager.go:215] "Topology Admit Handler" podUID="45d3012d-a6df-44b0-88d2-1028a78547bf" podNamespace="calico-system" podName="calico-node-v5qh8" Aug 6 07:54:05.282174 systemd[1]: 
Created slice kubepods-besteffort-pod45d3012d_a6df_44b0_88d2_1028a78547bf.slice - libcontainer container kubepods-besteffort-pod45d3012d_a6df_44b0_88d2_1028a78547bf.slice. Aug 6 07:54:05.337015 kubelet[2504]: I0806 07:54:05.335438 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-var-run-calico\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337015 kubelet[2504]: I0806 07:54:05.335530 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-var-lib-calico\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337015 kubelet[2504]: I0806 07:54:05.335566 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-policysync\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337015 kubelet[2504]: I0806 07:54:05.335595 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/45d3012d-a6df-44b0-88d2-1028a78547bf-node-certs\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337015 kubelet[2504]: I0806 07:54:05.335627 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-cni-net-dir\") pod \"calico-node-v5qh8\" (UID: 
\"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337486 kubelet[2504]: I0806 07:54:05.335654 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-cni-log-dir\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337486 kubelet[2504]: I0806 07:54:05.335702 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-xtables-lock\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337486 kubelet[2504]: I0806 07:54:05.335812 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-flexvol-driver-host\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337486 kubelet[2504]: I0806 07:54:05.335852 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-lib-modules\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337486 kubelet[2504]: I0806 07:54:05.335887 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45d3012d-a6df-44b0-88d2-1028a78547bf-tigera-ca-bundle\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " 
pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337673 kubelet[2504]: I0806 07:54:05.335923 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/45d3012d-a6df-44b0-88d2-1028a78547bf-cni-bin-dir\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.337673 kubelet[2504]: I0806 07:54:05.335957 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvc27\" (UniqueName: \"kubernetes.io/projected/45d3012d-a6df-44b0-88d2-1028a78547bf-kube-api-access-wvc27\") pod \"calico-node-v5qh8\" (UID: \"45d3012d-a6df-44b0-88d2-1028a78547bf\") " pod="calico-system/calico-node-v5qh8" Aug 6 07:54:05.397542 kubelet[2504]: I0806 07:54:05.397404 2504 topology_manager.go:215] "Topology Admit Handler" podUID="6471e7f0-19b6-4660-901b-e1b98911d115" podNamespace="calico-system" podName="csi-node-driver-jbj5f" Aug 6 07:54:05.398043 kubelet[2504]: E0806 07:54:05.398023 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbj5f" podUID="6471e7f0-19b6-4660-901b-e1b98911d115" Aug 6 07:54:05.437548 kubelet[2504]: I0806 07:54:05.436654 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6471e7f0-19b6-4660-901b-e1b98911d115-kubelet-dir\") pod \"csi-node-driver-jbj5f\" (UID: \"6471e7f0-19b6-4660-901b-e1b98911d115\") " pod="calico-system/csi-node-driver-jbj5f" Aug 6 07:54:05.437548 kubelet[2504]: I0806 07:54:05.436702 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/6471e7f0-19b6-4660-901b-e1b98911d115-registration-dir\") pod \"csi-node-driver-jbj5f\" (UID: \"6471e7f0-19b6-4660-901b-e1b98911d115\") " pod="calico-system/csi-node-driver-jbj5f" Aug 6 07:54:05.437548 kubelet[2504]: I0806 07:54:05.436788 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6471e7f0-19b6-4660-901b-e1b98911d115-socket-dir\") pod \"csi-node-driver-jbj5f\" (UID: \"6471e7f0-19b6-4660-901b-e1b98911d115\") " pod="calico-system/csi-node-driver-jbj5f" Aug 6 07:54:05.437548 kubelet[2504]: I0806 07:54:05.436818 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwqvx\" (UniqueName: \"kubernetes.io/projected/6471e7f0-19b6-4660-901b-e1b98911d115-kube-api-access-qwqvx\") pod \"csi-node-driver-jbj5f\" (UID: \"6471e7f0-19b6-4660-901b-e1b98911d115\") " pod="calico-system/csi-node-driver-jbj5f" Aug 6 07:54:05.437548 kubelet[2504]: I0806 07:54:05.436848 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6471e7f0-19b6-4660-901b-e1b98911d115-varrun\") pod \"csi-node-driver-jbj5f\" (UID: \"6471e7f0-19b6-4660-901b-e1b98911d115\") " pod="calico-system/csi-node-driver-jbj5f" Aug 6 07:54:05.439548 kubelet[2504]: E0806 07:54:05.439362 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.439548 kubelet[2504]: W0806 07:54:05.439392 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.439548 kubelet[2504]: E0806 07:54:05.439437 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.440731 kubelet[2504]: E0806 07:54:05.440398 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.440731 kubelet[2504]: W0806 07:54:05.440424 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.440731 kubelet[2504]: E0806 07:54:05.440456 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.442194 kubelet[2504]: E0806 07:54:05.441940 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.442194 kubelet[2504]: W0806 07:54:05.441996 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.442194 kubelet[2504]: E0806 07:54:05.442040 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.443742 kubelet[2504]: E0806 07:54:05.443485 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.443742 kubelet[2504]: W0806 07:54:05.443514 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.443742 kubelet[2504]: E0806 07:54:05.443539 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.444125 kubelet[2504]: E0806 07:54:05.444015 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.444125 kubelet[2504]: W0806 07:54:05.444048 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.444125 kubelet[2504]: E0806 07:54:05.444071 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.445007 kubelet[2504]: E0806 07:54:05.444867 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.445007 kubelet[2504]: W0806 07:54:05.444884 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.446151 kubelet[2504]: E0806 07:54:05.445555 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.446151 kubelet[2504]: W0806 07:54:05.445571 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.446151 kubelet[2504]: E0806 07:54:05.445592 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.447499 kubelet[2504]: E0806 07:54:05.446535 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.447499 kubelet[2504]: E0806 07:54:05.446645 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.447499 kubelet[2504]: W0806 07:54:05.446671 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.447499 kubelet[2504]: E0806 07:54:05.446705 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.448354 kubelet[2504]: E0806 07:54:05.447987 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.448354 kubelet[2504]: W0806 07:54:05.448002 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.448354 kubelet[2504]: E0806 07:54:05.448058 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.450494 kubelet[2504]: E0806 07:54:05.450461 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.452880 kubelet[2504]: W0806 07:54:05.452522 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.454007 kubelet[2504]: E0806 07:54:05.453888 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.454007 kubelet[2504]: W0806 07:54:05.453911 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.455663 kubelet[2504]: E0806 07:54:05.454354 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.455663 kubelet[2504]: W0806 07:54:05.454376 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.456114 kubelet[2504]: E0806 07:54:05.456089 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.456747 kubelet[2504]: W0806 07:54:05.456712 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.457052 kubelet[2504]: E0806 07:54:05.456330 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.457186 kubelet[2504]: E0806 07:54:05.456343 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.457186 kubelet[2504]: E0806 07:54:05.456353 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.457287 kubelet[2504]: E0806 07:54:05.457183 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.459092 kubelet[2504]: E0806 07:54:05.458488 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.459092 kubelet[2504]: W0806 07:54:05.458509 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.459405 kubelet[2504]: E0806 07:54:05.459286 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.459405 kubelet[2504]: W0806 07:54:05.459320 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.459405 kubelet[2504]: E0806 07:54:05.459368 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.460458 kubelet[2504]: E0806 07:54:05.460333 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.461219 kubelet[2504]: E0806 07:54:05.461077 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.461219 kubelet[2504]: W0806 07:54:05.461102 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.461857 kubelet[2504]: E0806 07:54:05.461314 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.471368 kubelet[2504]: E0806 07:54:05.462430 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.473030 kubelet[2504]: W0806 07:54:05.471378 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.473030 kubelet[2504]: E0806 07:54:05.471560 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.474164 kubelet[2504]: E0806 07:54:05.474114 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.474164 kubelet[2504]: W0806 07:54:05.474160 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.474405 kubelet[2504]: E0806 07:54:05.474300 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.475750 kubelet[2504]: E0806 07:54:05.475704 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.475750 kubelet[2504]: W0806 07:54:05.475732 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.476133 kubelet[2504]: E0806 07:54:05.475862 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.476232 kubelet[2504]: E0806 07:54:05.476213 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.476308 kubelet[2504]: W0806 07:54:05.476271 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.477093 kubelet[2504]: E0806 07:54:05.477040 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.478734 kubelet[2504]: E0806 07:54:05.478704 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.478941 kubelet[2504]: W0806 07:54:05.478730 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.478941 kubelet[2504]: E0806 07:54:05.478875 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.480802 kubelet[2504]: E0806 07:54:05.480767 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.480802 kubelet[2504]: W0806 07:54:05.480798 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.481121 kubelet[2504]: E0806 07:54:05.480926 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.484222 kubelet[2504]: E0806 07:54:05.484185 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.484883 kubelet[2504]: W0806 07:54:05.484265 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.484883 kubelet[2504]: E0806 07:54:05.484565 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.486411 kubelet[2504]: E0806 07:54:05.485401 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:05.486464 containerd[1467]: time="2024-08-06T07:54:05.486150836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86498d5f9-jndjm,Uid:a3a8c515-b9f7-4410-b651-da77ee776c5e,Namespace:calico-system,Attempt:0,}" Aug 6 07:54:05.488319 kubelet[2504]: E0806 07:54:05.487070 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.488319 kubelet[2504]: W0806 07:54:05.487088 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.488319 kubelet[2504]: E0806 07:54:05.487160 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.488319 kubelet[2504]: E0806 07:54:05.488126 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.488319 kubelet[2504]: W0806 07:54:05.488145 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.489005 kubelet[2504]: E0806 07:54:05.488877 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.490076 kubelet[2504]: E0806 07:54:05.489799 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.490076 kubelet[2504]: W0806 07:54:05.489817 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.491952 kubelet[2504]: E0806 07:54:05.491606 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.492266 kubelet[2504]: E0806 07:54:05.492253 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.492349 kubelet[2504]: W0806 07:54:05.492336 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.494329 kubelet[2504]: E0806 07:54:05.493917 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.497184 kubelet[2504]: E0806 07:54:05.494715 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.497184 kubelet[2504]: W0806 07:54:05.497005 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.497184 kubelet[2504]: E0806 07:54:05.497047 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.498108 kubelet[2504]: E0806 07:54:05.498070 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.498108 kubelet[2504]: W0806 07:54:05.498097 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.499224 kubelet[2504]: E0806 07:54:05.499029 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.499358 kubelet[2504]: E0806 07:54:05.499319 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.499358 kubelet[2504]: W0806 07:54:05.499344 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.499429 kubelet[2504]: E0806 07:54:05.499375 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.540046 kubelet[2504]: E0806 07:54:05.539806 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.540046 kubelet[2504]: W0806 07:54:05.539836 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.540046 kubelet[2504]: E0806 07:54:05.539878 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.542687 kubelet[2504]: E0806 07:54:05.542270 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.542687 kubelet[2504]: W0806 07:54:05.542296 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.542687 kubelet[2504]: E0806 07:54:05.542324 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.542687 kubelet[2504]: E0806 07:54:05.542565 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.542687 kubelet[2504]: W0806 07:54:05.542572 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.542687 kubelet[2504]: E0806 07:54:05.542582 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.543525 kubelet[2504]: E0806 07:54:05.543264 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.543525 kubelet[2504]: W0806 07:54:05.543279 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.543525 kubelet[2504]: E0806 07:54:05.543303 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.545034 kubelet[2504]: E0806 07:54:05.544453 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.545034 kubelet[2504]: W0806 07:54:05.544470 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.545034 kubelet[2504]: E0806 07:54:05.544494 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.545635 kubelet[2504]: E0806 07:54:05.545417 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.545635 kubelet[2504]: W0806 07:54:05.545435 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.546584 kubelet[2504]: E0806 07:54:05.546200 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.546584 kubelet[2504]: W0806 07:54:05.546426 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.546584 kubelet[2504]: E0806 07:54:05.546300 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.546584 kubelet[2504]: E0806 07:54:05.546554 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.547995 kubelet[2504]: E0806 07:54:05.547292 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.547995 kubelet[2504]: W0806 07:54:05.547310 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.547995 kubelet[2504]: E0806 07:54:05.547893 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.549048 kubelet[2504]: E0806 07:54:05.548494 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.549048 kubelet[2504]: W0806 07:54:05.548510 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.549048 kubelet[2504]: E0806 07:54:05.548782 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.549918 kubelet[2504]: E0806 07:54:05.549745 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.549918 kubelet[2504]: W0806 07:54:05.549768 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.549918 kubelet[2504]: E0806 07:54:05.549896 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.550369 kubelet[2504]: E0806 07:54:05.550354 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.550577 kubelet[2504]: W0806 07:54:05.550461 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.550577 kubelet[2504]: E0806 07:54:05.550512 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.550874 kubelet[2504]: E0806 07:54:05.550825 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.550874 kubelet[2504]: W0806 07:54:05.550842 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.551201 kubelet[2504]: E0806 07:54:05.551106 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.553946 kubelet[2504]: E0806 07:54:05.551391 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.553946 kubelet[2504]: W0806 07:54:05.551401 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.553946 kubelet[2504]: E0806 07:54:05.551620 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.553946 kubelet[2504]: E0806 07:54:05.551672 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.553946 kubelet[2504]: W0806 07:54:05.551678 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.553946 kubelet[2504]: E0806 07:54:05.551729 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.553946 kubelet[2504]: E0806 07:54:05.552228 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.553946 kubelet[2504]: W0806 07:54:05.552241 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.553946 kubelet[2504]: E0806 07:54:05.552378 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.554764 kubelet[2504]: E0806 07:54:05.554532 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.554764 kubelet[2504]: W0806 07:54:05.554553 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.554764 kubelet[2504]: E0806 07:54:05.554707 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.557357 kubelet[2504]: E0806 07:54:05.557323 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.557357 kubelet[2504]: W0806 07:54:05.557390 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.557357 kubelet[2504]: E0806 07:54:05.557484 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.559082 kubelet[2504]: E0806 07:54:05.558693 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.559082 kubelet[2504]: W0806 07:54:05.558719 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.559082 kubelet[2504]: E0806 07:54:05.558816 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.560348 kubelet[2504]: E0806 07:54:05.560154 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.560348 kubelet[2504]: W0806 07:54:05.560186 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.560348 kubelet[2504]: E0806 07:54:05.560274 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.565468 kubelet[2504]: E0806 07:54:05.560724 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.565468 kubelet[2504]: W0806 07:54:05.560739 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.565468 kubelet[2504]: E0806 07:54:05.562087 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.565468 kubelet[2504]: W0806 07:54:05.562101 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.565468 kubelet[2504]: E0806 07:54:05.563514 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.565468 kubelet[2504]: W0806 07:54:05.563532 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.565468 kubelet[2504]: E0806 07:54:05.564153 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.565468 kubelet[2504]: W0806 07:54:05.564167 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.567814 kubelet[2504]: E0806 07:54:05.567047 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 
6 07:54:05.567814 kubelet[2504]: W0806 07:54:05.567078 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.567814 kubelet[2504]: E0806 07:54:05.567113 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.567814 kubelet[2504]: E0806 07:54:05.567164 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.570473 kubelet[2504]: E0806 07:54:05.570180 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.570473 kubelet[2504]: W0806 07:54:05.570229 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.570473 kubelet[2504]: E0806 07:54:05.570263 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.570473 kubelet[2504]: E0806 07:54:05.570324 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.570743 kubelet[2504]: E0806 07:54:05.570707 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:05.574667 kubelet[2504]: E0806 07:54:05.570941 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.574667 kubelet[2504]: E0806 07:54:05.572821 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.574667 kubelet[2504]: W0806 07:54:05.574603 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.574667 kubelet[2504]: E0806 07:54:05.574634 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.575336 containerd[1467]: time="2024-08-06T07:54:05.572607914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:54:05.575336 containerd[1467]: time="2024-08-06T07:54:05.573231272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:05.575336 containerd[1467]: time="2024-08-06T07:54:05.573283035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:54:05.575336 containerd[1467]: time="2024-08-06T07:54:05.573300412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:05.600855 kubelet[2504]: E0806 07:54:05.600811 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.600855 kubelet[2504]: W0806 07:54:05.600846 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.601082 kubelet[2504]: E0806 07:54:05.600887 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.624999 kubelet[2504]: E0806 07:54:05.623657 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:05.624999 kubelet[2504]: W0806 07:54:05.623696 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:05.624999 kubelet[2504]: E0806 07:54:05.623732 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:05.628304 systemd[1]: Started cri-containerd-1e018f10d43837deaae66aa0950cfdf27bd8c6db2bd1ce88f0de6d6bafa3391a.scope - libcontainer container 1e018f10d43837deaae66aa0950cfdf27bd8c6db2bd1ce88f0de6d6bafa3391a. 
Aug 6 07:54:05.813538 containerd[1467]: time="2024-08-06T07:54:05.813476139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86498d5f9-jndjm,Uid:a3a8c515-b9f7-4410-b651-da77ee776c5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e018f10d43837deaae66aa0950cfdf27bd8c6db2bd1ce88f0de6d6bafa3391a\"" Aug 6 07:54:05.816200 kubelet[2504]: E0806 07:54:05.816145 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:05.818675 containerd[1467]: time="2024-08-06T07:54:05.818311058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 6 07:54:05.895908 kubelet[2504]: E0806 07:54:05.895331 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:05.896555 containerd[1467]: time="2024-08-06T07:54:05.896207533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5qh8,Uid:45d3012d-a6df-44b0-88d2-1028a78547bf,Namespace:calico-system,Attempt:0,}" Aug 6 07:54:05.946662 containerd[1467]: time="2024-08-06T07:54:05.946295028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:54:05.946662 containerd[1467]: time="2024-08-06T07:54:05.946382043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:05.946662 containerd[1467]: time="2024-08-06T07:54:05.946414444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:54:05.946662 containerd[1467]: time="2024-08-06T07:54:05.946434314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:05.980354 systemd[1]: Started cri-containerd-7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67.scope - libcontainer container 7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67. Aug 6 07:54:06.030167 containerd[1467]: time="2024-08-06T07:54:06.029896979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5qh8,Uid:45d3012d-a6df-44b0-88d2-1028a78547bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67\"" Aug 6 07:54:06.031723 kubelet[2504]: E0806 07:54:06.031631 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:07.272387 kubelet[2504]: E0806 07:54:07.270801 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbj5f" podUID="6471e7f0-19b6-4660-901b-e1b98911d115" Aug 6 07:54:08.171649 containerd[1467]: time="2024-08-06T07:54:08.171160401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:08.173868 containerd[1467]: time="2024-08-06T07:54:08.173773836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 6 07:54:08.174684 containerd[1467]: time="2024-08-06T07:54:08.174263993Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:08.178932 containerd[1467]: time="2024-08-06T07:54:08.178873583Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:08.183007 containerd[1467]: time="2024-08-06T07:54:08.182736913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.364365768s" Aug 6 07:54:08.183007 containerd[1467]: time="2024-08-06T07:54:08.182947430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 6 07:54:08.185532 containerd[1467]: time="2024-08-06T07:54:08.184270279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 6 07:54:08.208154 containerd[1467]: time="2024-08-06T07:54:08.208104025Z" level=info msg="CreateContainer within sandbox \"1e018f10d43837deaae66aa0950cfdf27bd8c6db2bd1ce88f0de6d6bafa3391a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 6 07:54:08.245713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3250852371.mount: Deactivated successfully. 
Aug 6 07:54:08.247421 containerd[1467]: time="2024-08-06T07:54:08.246210325Z" level=info msg="CreateContainer within sandbox \"1e018f10d43837deaae66aa0950cfdf27bd8c6db2bd1ce88f0de6d6bafa3391a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b6c55c591d483be6125917a3b2014ab85734317c86df6554a44582f0062d9f25\"" Aug 6 07:54:08.250327 containerd[1467]: time="2024-08-06T07:54:08.249863402Z" level=info msg="StartContainer for \"b6c55c591d483be6125917a3b2014ab85734317c86df6554a44582f0062d9f25\"" Aug 6 07:54:08.329565 systemd[1]: Started cri-containerd-b6c55c591d483be6125917a3b2014ab85734317c86df6554a44582f0062d9f25.scope - libcontainer container b6c55c591d483be6125917a3b2014ab85734317c86df6554a44582f0062d9f25. Aug 6 07:54:08.416375 containerd[1467]: time="2024-08-06T07:54:08.415869439Z" level=info msg="StartContainer for \"b6c55c591d483be6125917a3b2014ab85734317c86df6554a44582f0062d9f25\" returns successfully" Aug 6 07:54:08.424865 kubelet[2504]: E0806 07:54:08.424651 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:08.433904 kubelet[2504]: E0806 07:54:08.433855 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.434154 kubelet[2504]: W0806 07:54:08.433879 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.434154 kubelet[2504]: E0806 07:54:08.434012 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.434561 kubelet[2504]: E0806 07:54:08.434435 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.434561 kubelet[2504]: W0806 07:54:08.434448 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.434561 kubelet[2504]: E0806 07:54:08.434468 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.435207 kubelet[2504]: E0806 07:54:08.435044 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.435207 kubelet[2504]: W0806 07:54:08.435057 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.435398 kubelet[2504]: E0806 07:54:08.435285 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.436067 kubelet[2504]: E0806 07:54:08.436042 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.436393 kubelet[2504]: W0806 07:54:08.436154 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.436393 kubelet[2504]: E0806 07:54:08.436172 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.437082 kubelet[2504]: E0806 07:54:08.436851 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.437082 kubelet[2504]: W0806 07:54:08.436872 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.437082 kubelet[2504]: E0806 07:54:08.436887 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.437538 kubelet[2504]: E0806 07:54:08.437304 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.437538 kubelet[2504]: W0806 07:54:08.437348 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.437538 kubelet[2504]: E0806 07:54:08.437364 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.437836 kubelet[2504]: E0806 07:54:08.437729 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.437836 kubelet[2504]: W0806 07:54:08.437739 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.437836 kubelet[2504]: E0806 07:54:08.437751 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.438513 kubelet[2504]: E0806 07:54:08.438116 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.438513 kubelet[2504]: W0806 07:54:08.438136 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.438513 kubelet[2504]: E0806 07:54:08.438153 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.438758 kubelet[2504]: E0806 07:54:08.438743 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.438758 kubelet[2504]: W0806 07:54:08.438757 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.438834 kubelet[2504]: E0806 07:54:08.438772 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.439862 kubelet[2504]: E0806 07:54:08.439843 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.439862 kubelet[2504]: W0806 07:54:08.439859 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.440338 kubelet[2504]: E0806 07:54:08.439873 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.440489 kubelet[2504]: I0806 07:54:08.440418 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-86498d5f9-jndjm" podStartSLOduration=1.0736053779999999 podCreationTimestamp="2024-08-06 07:54:05 +0000 UTC" firstStartedPulling="2024-08-06 07:54:05.817402718 +0000 UTC m=+19.710110868" lastFinishedPulling="2024-08-06 07:54:08.184105298 +0000 UTC m=+22.076813434" observedRunningTime="2024-08-06 07:54:08.439218604 +0000 UTC m=+22.331926761" watchObservedRunningTime="2024-08-06 07:54:08.440307944 +0000 UTC m=+22.333016161" Aug 6 07:54:08.440570 kubelet[2504]: E0806 07:54:08.440426 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.440570 kubelet[2504]: W0806 07:54:08.440508 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.440570 kubelet[2504]: E0806 07:54:08.440525 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.441101 kubelet[2504]: E0806 07:54:08.440905 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.441101 kubelet[2504]: W0806 07:54:08.440920 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.441101 kubelet[2504]: E0806 07:54:08.441072 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.441605 kubelet[2504]: E0806 07:54:08.441584 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.441605 kubelet[2504]: W0806 07:54:08.441600 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.441684 kubelet[2504]: E0806 07:54:08.441651 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.442025 kubelet[2504]: E0806 07:54:08.442012 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.442073 kubelet[2504]: W0806 07:54:08.442024 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.442073 kubelet[2504]: E0806 07:54:08.442053 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.442614 kubelet[2504]: E0806 07:54:08.442595 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.442749 kubelet[2504]: W0806 07:54:08.442609 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.442749 kubelet[2504]: E0806 07:54:08.442724 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.470008 kubelet[2504]: E0806 07:54:08.469821 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.470008 kubelet[2504]: W0806 07:54:08.469845 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.470008 kubelet[2504]: E0806 07:54:08.469872 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.470457 kubelet[2504]: E0806 07:54:08.470330 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.470457 kubelet[2504]: W0806 07:54:08.470342 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.470457 kubelet[2504]: E0806 07:54:08.470356 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.470912 kubelet[2504]: E0806 07:54:08.470730 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.470912 kubelet[2504]: W0806 07:54:08.470745 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.470912 kubelet[2504]: E0806 07:54:08.470767 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.471246 kubelet[2504]: E0806 07:54:08.471236 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.471400 kubelet[2504]: W0806 07:54:08.471296 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.471400 kubelet[2504]: E0806 07:54:08.471338 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.471804 kubelet[2504]: E0806 07:54:08.471727 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.471804 kubelet[2504]: W0806 07:54:08.471741 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.471804 kubelet[2504]: E0806 07:54:08.471762 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.472386 kubelet[2504]: E0806 07:54:08.472235 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.472386 kubelet[2504]: W0806 07:54:08.472255 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.472386 kubelet[2504]: E0806 07:54:08.472304 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.472569 kubelet[2504]: E0806 07:54:08.472558 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.472634 kubelet[2504]: W0806 07:54:08.472613 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.472787 kubelet[2504]: E0806 07:54:08.472701 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.472963 kubelet[2504]: E0806 07:54:08.472953 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.473051 kubelet[2504]: W0806 07:54:08.473010 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.473334 kubelet[2504]: E0806 07:54:08.473234 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.473566 kubelet[2504]: E0806 07:54:08.473555 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.473861 kubelet[2504]: W0806 07:54:08.473609 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.473861 kubelet[2504]: E0806 07:54:08.473628 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.474119 kubelet[2504]: E0806 07:54:08.474108 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.474183 kubelet[2504]: W0806 07:54:08.474174 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.474365 kubelet[2504]: E0806 07:54:08.474252 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 07:54:08.474462 kubelet[2504]: E0806 07:54:08.474453 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.474586 kubelet[2504]: W0806 07:54:08.474506 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.474653 kubelet[2504]: E0806 07:54:08.474635 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 07:54:08.474867 kubelet[2504]: E0806 07:54:08.474784 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 07:54:08.474867 kubelet[2504]: W0806 07:54:08.474793 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 07:54:08.474867 kubelet[2504]: E0806 07:54:08.474816 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 6 07:54:08.475135 kubelet[2504]: E0806 07:54:08.475126 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:08.475253 kubelet[2504]: W0806 07:54:08.475188 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:08.475253 kubelet[2504]: E0806 07:54:08.475211 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:08.475467 kubelet[2504]: E0806 07:54:08.475451 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:08.475512 kubelet[2504]: W0806 07:54:08.475466 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:08.475512 kubelet[2504]: E0806 07:54:08.475490 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:08.475846 kubelet[2504]: E0806 07:54:08.475832 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:08.475846 kubelet[2504]: W0806 07:54:08.475846 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:08.475919 kubelet[2504]: E0806 07:54:08.475864 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:08.476634 kubelet[2504]: E0806 07:54:08.476196 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:08.476634 kubelet[2504]: W0806 07:54:08.476208 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:08.476634 kubelet[2504]: E0806 07:54:08.476228 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:08.477027 kubelet[2504]: E0806 07:54:08.477007 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:08.477201 kubelet[2504]: W0806 07:54:08.477173 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:08.477372 kubelet[2504]: E0806 07:54:08.477362 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:08.477740 kubelet[2504]: E0806 07:54:08.477691 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:08.477740 kubelet[2504]: W0806 07:54:08.477702 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:08.477740 kubelet[2504]: E0806 07:54:08.477715 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.271344 kubelet[2504]: E0806 07:54:09.271162 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbj5f" podUID="6471e7f0-19b6-4660-901b-e1b98911d115"
Aug 6 07:54:09.428598 kubelet[2504]: I0806 07:54:09.427620 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 6 07:54:09.430009 kubelet[2504]: E0806 07:54:09.429455 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:54:09.452000 kubelet[2504]: E0806 07:54:09.451923 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.452000 kubelet[2504]: W0806 07:54:09.451955 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.452888 kubelet[2504]: E0806 07:54:09.452759 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.453264 kubelet[2504]: E0806 07:54:09.453135 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.453264 kubelet[2504]: W0806 07:54:09.453152 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.453264 kubelet[2504]: E0806 07:54:09.453177 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.454108 kubelet[2504]: E0806 07:54:09.453823 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.454108 kubelet[2504]: W0806 07:54:09.453942 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.454108 kubelet[2504]: E0806 07:54:09.453962 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.454823 kubelet[2504]: E0806 07:54:09.454578 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.454823 kubelet[2504]: W0806 07:54:09.454592 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.454823 kubelet[2504]: E0806 07:54:09.454803 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.455625 kubelet[2504]: E0806 07:54:09.455303 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.455625 kubelet[2504]: W0806 07:54:09.455316 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.455625 kubelet[2504]: E0806 07:54:09.455331 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.455923 kubelet[2504]: E0806 07:54:09.455865 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.455923 kubelet[2504]: W0806 07:54:09.455878 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.456377 kubelet[2504]: E0806 07:54:09.456000 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.456582 kubelet[2504]: E0806 07:54:09.456483 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.456582 kubelet[2504]: W0806 07:54:09.456497 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.456582 kubelet[2504]: E0806 07:54:09.456512 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.457000 kubelet[2504]: E0806 07:54:09.456939 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.457000 kubelet[2504]: W0806 07:54:09.456951 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.457000 kubelet[2504]: E0806 07:54:09.456967 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.458043 kubelet[2504]: E0806 07:54:09.457959 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.458043 kubelet[2504]: W0806 07:54:09.457981 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.458043 kubelet[2504]: E0806 07:54:09.458015 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.458816 kubelet[2504]: E0806 07:54:09.458370 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.458816 kubelet[2504]: W0806 07:54:09.458382 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.458816 kubelet[2504]: E0806 07:54:09.458396 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.460558 kubelet[2504]: E0806 07:54:09.460530 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.460558 kubelet[2504]: W0806 07:54:09.460548 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.460657 kubelet[2504]: E0806 07:54:09.460568 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.462255 kubelet[2504]: E0806 07:54:09.461857 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.462255 kubelet[2504]: W0806 07:54:09.461875 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.462255 kubelet[2504]: E0806 07:54:09.461898 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.463187 kubelet[2504]: E0806 07:54:09.462651 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.463187 kubelet[2504]: W0806 07:54:09.462684 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.463187 kubelet[2504]: E0806 07:54:09.462702 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.463318 kubelet[2504]: E0806 07:54:09.463301 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.463318 kubelet[2504]: W0806 07:54:09.463312 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.463370 kubelet[2504]: E0806 07:54:09.463326 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.464348 kubelet[2504]: E0806 07:54:09.463761 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.464348 kubelet[2504]: W0806 07:54:09.463776 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.464348 kubelet[2504]: E0806 07:54:09.463794 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 6 07:54:09.479177 kubelet[2504]: E0806 07:54:09.479140 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.479900 kubelet[2504]: W0806 07:54:09.479370 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.479900 kubelet[2504]: E0806 07:54:09.479409 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.481007 kubelet[2504]: E0806 07:54:09.480963 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.482040 kubelet[2504]: W0806 07:54:09.481204 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.482040 kubelet[2504]: E0806 07:54:09.481260 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.483305 kubelet[2504]: E0806 07:54:09.482841 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.483305 kubelet[2504]: W0806 07:54:09.482863 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.483305 kubelet[2504]: E0806 07:54:09.482951 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.483896 kubelet[2504]: E0806 07:54:09.483798 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.483896 kubelet[2504]: W0806 07:54:09.483821 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.483896 kubelet[2504]: E0806 07:54:09.483868 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.485123 kubelet[2504]: E0806 07:54:09.484833 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.485123 kubelet[2504]: W0806 07:54:09.484854 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.485265 kubelet[2504]: E0806 07:54:09.485211 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.486594 kubelet[2504]: E0806 07:54:09.485898 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.486594 kubelet[2504]: W0806 07:54:09.485918 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.486594 kubelet[2504]: E0806 07:54:09.486023 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.486594 kubelet[2504]: E0806 07:54:09.486558 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.486594 kubelet[2504]: W0806 07:54:09.486571 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.487131 kubelet[2504]: E0806 07:54:09.486927 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.487715 kubelet[2504]: E0806 07:54:09.487578 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.487715 kubelet[2504]: W0806 07:54:09.487596 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.488067 kubelet[2504]: E0806 07:54:09.487930 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.488647 kubelet[2504]: E0806 07:54:09.488631 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.489224 kubelet[2504]: W0806 07:54:09.488753 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.489224 kubelet[2504]: E0806 07:54:09.488810 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.490284 kubelet[2504]: E0806 07:54:09.490074 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.490284 kubelet[2504]: W0806 07:54:09.490096 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.491242 kubelet[2504]: E0806 07:54:09.490303 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.491242 kubelet[2504]: E0806 07:54:09.491009 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.491242 kubelet[2504]: W0806 07:54:09.491025 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.491432 kubelet[2504]: E0806 07:54:09.491296 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.492165 kubelet[2504]: E0806 07:54:09.492047 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.492165 kubelet[2504]: W0806 07:54:09.492065 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.492165 kubelet[2504]: E0806 07:54:09.492099 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.493098 kubelet[2504]: E0806 07:54:09.493006 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.493098 kubelet[2504]: W0806 07:54:09.493020 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.493098 kubelet[2504]: E0806 07:54:09.493048 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.494924 kubelet[2504]: E0806 07:54:09.494327 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.494924 kubelet[2504]: W0806 07:54:09.494344 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.494924 kubelet[2504]: E0806 07:54:09.494373 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.495488 kubelet[2504]: E0806 07:54:09.495368 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.495488 kubelet[2504]: W0806 07:54:09.495391 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.495488 kubelet[2504]: E0806 07:54:09.495416 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.496213 kubelet[2504]: E0806 07:54:09.495951 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.496213 kubelet[2504]: W0806 07:54:09.496055 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.496578 kubelet[2504]: E0806 07:54:09.496436 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.496578 kubelet[2504]: W0806 07:54:09.496450 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.496578 kubelet[2504]: E0806 07:54:09.496469 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.499177 kubelet[2504]: E0806 07:54:09.499009 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.499177 kubelet[2504]: E0806 07:54:09.499130 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 6 07:54:09.499177 kubelet[2504]: W0806 07:54:09.499140 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 6 07:54:09.499177 kubelet[2504]: E0806 07:54:09.499155 2504 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 6 07:54:09.604444 containerd[1467]: time="2024-08-06T07:54:09.604291181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:54:09.608346 containerd[1467]: time="2024-08-06T07:54:09.606954311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Aug 6 07:54:09.608346 containerd[1467]: time="2024-08-06T07:54:09.607092461Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:54:09.609217 containerd[1467]: time="2024-08-06T07:54:09.609159076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:54:09.611054 containerd[1467]:
time="2024-08-06T07:54:09.610996139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.426657015s"
Aug 6 07:54:09.611186 containerd[1467]: time="2024-08-06T07:54:09.611057819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
Aug 6 07:54:09.615348 containerd[1467]: time="2024-08-06T07:54:09.615292236Z" level=info msg="CreateContainer within sandbox \"7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 6 07:54:09.638204 containerd[1467]: time="2024-08-06T07:54:09.638148132Z" level=info msg="CreateContainer within sandbox \"7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec\""
Aug 6 07:54:09.642155 containerd[1467]: time="2024-08-06T07:54:09.642108448Z" level=info msg="StartContainer for \"44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec\""
Aug 6 07:54:09.692251 systemd[1]: Started cri-containerd-44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec.scope - libcontainer container 44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec.
Aug 6 07:54:09.773573 containerd[1467]: time="2024-08-06T07:54:09.773485606Z" level=info msg="StartContainer for \"44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec\" returns successfully"
Aug 6 07:54:09.814123 systemd[1]: cri-containerd-44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec.scope: Deactivated successfully.
Aug 6 07:54:09.896877 containerd[1467]: time="2024-08-06T07:54:09.896331117Z" level=info msg="shim disconnected" id=44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec namespace=k8s.io
Aug 6 07:54:09.896877 containerd[1467]: time="2024-08-06T07:54:09.896398698Z" level=warning msg="cleaning up after shim disconnected" id=44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec namespace=k8s.io
Aug 6 07:54:09.896877 containerd[1467]: time="2024-08-06T07:54:09.896408335Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 6 07:54:10.195706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44aaa9d86bfb5fe7864d5587b8564ac66f62e725fc4589dd3e56a228a916deec-rootfs.mount: Deactivated successfully.
Aug 6 07:54:10.433680 kubelet[2504]: E0806 07:54:10.433641 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:54:10.435963 containerd[1467]: time="2024-08-06T07:54:10.434689324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug 6 07:54:11.270860 kubelet[2504]: E0806 07:54:11.270793 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbj5f" podUID="6471e7f0-19b6-4660-901b-e1b98911d115"
Aug 6 07:54:13.272166 kubelet[2504]: E0806 07:54:13.272083 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbj5f" podUID="6471e7f0-19b6-4660-901b-e1b98911d115"
Aug 6 07:54:14.041616 containerd[1467]: time="2024-08-06T07:54:14.040952532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:54:14.048584 containerd[1467]: time="2024-08-06T07:54:14.048491298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Aug 6 07:54:14.062000 containerd[1467]: time="2024-08-06T07:54:14.050176033Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:54:14.062000 containerd[1467]: time="2024-08-06T07:54:14.058234480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 3.623510363s"
Aug 6 07:54:14.062000 containerd[1467]: time="2024-08-06T07:54:14.061536300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Aug 6 07:54:14.062000 containerd[1467]: time="2024-08-06T07:54:14.061890470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 6 07:54:14.065786 containerd[1467]: time="2024-08-06T07:54:14.065730256Z" level=info msg="CreateContainer within sandbox \"7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 6 07:54:14.154264 containerd[1467]: time="2024-08-06T07:54:14.154158366Z" level=info msg="CreateContainer within sandbox \"7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde\""
Aug 6 07:54:14.155749 containerd[1467]: time="2024-08-06T07:54:14.155575587Z" level=info msg="StartContainer for \"cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde\""
Aug 6 07:54:14.307609 systemd[1]: Started cri-containerd-cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde.scope - libcontainer container cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde.
Aug 6 07:54:14.386003 containerd[1467]: time="2024-08-06T07:54:14.385437072Z" level=info msg="StartContainer for \"cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde\" returns successfully" Aug 6 07:54:14.452903 kubelet[2504]: E0806 07:54:14.452862 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:15.130590 systemd[1]: cri-containerd-cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde.scope: Deactivated successfully. Aug 6 07:54:15.181630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde-rootfs.mount: Deactivated successfully. Aug 6 07:54:15.201629 kubelet[2504]: I0806 07:54:15.201555 2504 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 6 07:54:15.212351 containerd[1467]: time="2024-08-06T07:54:15.212049650Z" level=info msg="shim disconnected" id=cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde namespace=k8s.io Aug 6 07:54:15.214182 containerd[1467]: time="2024-08-06T07:54:15.213388641Z" level=warning msg="cleaning up after shim disconnected" id=cb4b770c3cd77124c34990df58d03a29f4a4919bb73a648f82feec1311725bde namespace=k8s.io Aug 6 07:54:15.214182 containerd[1467]: time="2024-08-06T07:54:15.213440522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:54:15.243769 kubelet[2504]: I0806 07:54:15.242865 2504 topology_manager.go:215] "Topology Admit Handler" podUID="c477a3e4-8453-408f-927e-4d6c3f9f2de8" podNamespace="kube-system" podName="coredns-5dd5756b68-p44ss" Aug 6 07:54:15.251915 kubelet[2504]: I0806 07:54:15.251695 2504 topology_manager.go:215] "Topology Admit Handler" podUID="c9550f08-461d-4533-b70f-c490af4113f0" podNamespace="kube-system" podName="coredns-5dd5756b68-d6hq7" Aug 6 07:54:15.253199 kubelet[2504]: I0806 07:54:15.253000 2504 
topology_manager.go:215] "Topology Admit Handler" podUID="c1565eaa-995e-4ed3-9f2f-779318f9f3a8" podNamespace="calico-system" podName="calico-kube-controllers-5568985977-nxh2j" Aug 6 07:54:15.264274 systemd[1]: Created slice kubepods-burstable-podc477a3e4_8453_408f_927e_4d6c3f9f2de8.slice - libcontainer container kubepods-burstable-podc477a3e4_8453_408f_927e_4d6c3f9f2de8.slice. Aug 6 07:54:15.282559 systemd[1]: Created slice kubepods-besteffort-podc1565eaa_995e_4ed3_9f2f_779318f9f3a8.slice - libcontainer container kubepods-besteffort-podc1565eaa_995e_4ed3_9f2f_779318f9f3a8.slice. Aug 6 07:54:15.302654 systemd[1]: Created slice kubepods-burstable-podc9550f08_461d_4533_b70f_c490af4113f0.slice - libcontainer container kubepods-burstable-podc9550f08_461d_4533_b70f_c490af4113f0.slice. Aug 6 07:54:15.317548 systemd[1]: Created slice kubepods-besteffort-pod6471e7f0_19b6_4660_901b_e1b98911d115.slice - libcontainer container kubepods-besteffort-pod6471e7f0_19b6_4660_901b_e1b98911d115.slice. Aug 6 07:54:15.324284 containerd[1467]: time="2024-08-06T07:54:15.324237938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jbj5f,Uid:6471e7f0-19b6-4660-901b-e1b98911d115,Namespace:calico-system,Attempt:0,}" Aug 6 07:54:15.441542 kubelet[2504]: I0806 07:54:15.441257 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c9sz\" (UniqueName: \"kubernetes.io/projected/c9550f08-461d-4533-b70f-c490af4113f0-kube-api-access-2c9sz\") pod \"coredns-5dd5756b68-d6hq7\" (UID: \"c9550f08-461d-4533-b70f-c490af4113f0\") " pod="kube-system/coredns-5dd5756b68-d6hq7" Aug 6 07:54:15.441542 kubelet[2504]: I0806 07:54:15.441352 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9550f08-461d-4533-b70f-c490af4113f0-config-volume\") pod \"coredns-5dd5756b68-d6hq7\" (UID: \"c9550f08-461d-4533-b70f-c490af4113f0\") " 
pod="kube-system/coredns-5dd5756b68-d6hq7" Aug 6 07:54:15.441542 kubelet[2504]: I0806 07:54:15.441397 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c477a3e4-8453-408f-927e-4d6c3f9f2de8-config-volume\") pod \"coredns-5dd5756b68-p44ss\" (UID: \"c477a3e4-8453-408f-927e-4d6c3f9f2de8\") " pod="kube-system/coredns-5dd5756b68-p44ss" Aug 6 07:54:15.441542 kubelet[2504]: I0806 07:54:15.441421 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1565eaa-995e-4ed3-9f2f-779318f9f3a8-tigera-ca-bundle\") pod \"calico-kube-controllers-5568985977-nxh2j\" (UID: \"c1565eaa-995e-4ed3-9f2f-779318f9f3a8\") " pod="calico-system/calico-kube-controllers-5568985977-nxh2j" Aug 6 07:54:15.441542 kubelet[2504]: I0806 07:54:15.441456 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pdw\" (UniqueName: \"kubernetes.io/projected/c477a3e4-8453-408f-927e-4d6c3f9f2de8-kube-api-access-25pdw\") pod \"coredns-5dd5756b68-p44ss\" (UID: \"c477a3e4-8453-408f-927e-4d6c3f9f2de8\") " pod="kube-system/coredns-5dd5756b68-p44ss" Aug 6 07:54:15.441916 kubelet[2504]: I0806 07:54:15.441481 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87sn\" (UniqueName: \"kubernetes.io/projected/c1565eaa-995e-4ed3-9f2f-779318f9f3a8-kube-api-access-z87sn\") pod \"calico-kube-controllers-5568985977-nxh2j\" (UID: \"c1565eaa-995e-4ed3-9f2f-779318f9f3a8\") " pod="calico-system/calico-kube-controllers-5568985977-nxh2j" Aug 6 07:54:15.473856 kubelet[2504]: E0806 07:54:15.473618 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 
07:54:15.480206 containerd[1467]: time="2024-08-06T07:54:15.479310382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 6 07:54:15.551435 containerd[1467]: time="2024-08-06T07:54:15.547579962Z" level=error msg="Failed to destroy network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.556324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb-shm.mount: Deactivated successfully. Aug 6 07:54:15.561822 containerd[1467]: time="2024-08-06T07:54:15.561740521Z" level=error msg="encountered an error cleaning up failed sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.562055 containerd[1467]: time="2024-08-06T07:54:15.561842586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jbj5f,Uid:6471e7f0-19b6-4660-901b-e1b98911d115,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.563238 kubelet[2504]: E0806 07:54:15.562144 2504 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.563238 kubelet[2504]: E0806 07:54:15.562201 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jbj5f" Aug 6 07:54:15.563238 kubelet[2504]: E0806 07:54:15.562222 2504 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jbj5f" Aug 6 07:54:15.563368 kubelet[2504]: E0806 07:54:15.562277 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jbj5f_calico-system(6471e7f0-19b6-4660-901b-e1b98911d115)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jbj5f_calico-system(6471e7f0-19b6-4660-901b-e1b98911d115)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jbj5f" podUID="6471e7f0-19b6-4660-901b-e1b98911d115" Aug 6 07:54:15.577143 kubelet[2504]: E0806 07:54:15.577099 2504 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:15.577845 containerd[1467]: time="2024-08-06T07:54:15.577809846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p44ss,Uid:c477a3e4-8453-408f-927e-4d6c3f9f2de8,Namespace:kube-system,Attempt:0,}" Aug 6 07:54:15.593641 containerd[1467]: time="2024-08-06T07:54:15.592939895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5568985977-nxh2j,Uid:c1565eaa-995e-4ed3-9f2f-779318f9f3a8,Namespace:calico-system,Attempt:0,}" Aug 6 07:54:15.611423 kubelet[2504]: E0806 07:54:15.611198 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:15.613094 containerd[1467]: time="2024-08-06T07:54:15.612565047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d6hq7,Uid:c9550f08-461d-4533-b70f-c490af4113f0,Namespace:kube-system,Attempt:0,}" Aug 6 07:54:15.751130 containerd[1467]: time="2024-08-06T07:54:15.750144447Z" level=error msg="Failed to destroy network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.751130 containerd[1467]: time="2024-08-06T07:54:15.750636788Z" level=error msg="encountered an error cleaning up failed sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 
6 07:54:15.751130 containerd[1467]: time="2024-08-06T07:54:15.750715066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5568985977-nxh2j,Uid:c1565eaa-995e-4ed3-9f2f-779318f9f3a8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.752762 kubelet[2504]: E0806 07:54:15.751620 2504 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.752762 kubelet[2504]: E0806 07:54:15.751686 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5568985977-nxh2j" Aug 6 07:54:15.752762 kubelet[2504]: E0806 07:54:15.751707 2504 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5568985977-nxh2j" Aug 6 07:54:15.753111 kubelet[2504]: E0806 07:54:15.751765 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5568985977-nxh2j_calico-system(c1565eaa-995e-4ed3-9f2f-779318f9f3a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5568985977-nxh2j_calico-system(c1565eaa-995e-4ed3-9f2f-779318f9f3a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5568985977-nxh2j" podUID="c1565eaa-995e-4ed3-9f2f-779318f9f3a8" Aug 6 07:54:15.763849 containerd[1467]: time="2024-08-06T07:54:15.763791381Z" level=error msg="Failed to destroy network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.764269 containerd[1467]: time="2024-08-06T07:54:15.764229894Z" level=error msg="encountered an error cleaning up failed sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.764339 containerd[1467]: time="2024-08-06T07:54:15.764316391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p44ss,Uid:c477a3e4-8453-408f-927e-4d6c3f9f2de8,Namespace:kube-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.764770 kubelet[2504]: E0806 07:54:15.764654 2504 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.765002 kubelet[2504]: E0806 07:54:15.764885 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-p44ss" Aug 6 07:54:15.765002 kubelet[2504]: E0806 07:54:15.764935 2504 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-p44ss" Aug 6 07:54:15.765335 kubelet[2504]: E0806 07:54:15.765193 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-p44ss_kube-system(c477a3e4-8453-408f-927e-4d6c3f9f2de8)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-5dd5756b68-p44ss_kube-system(c477a3e4-8453-408f-927e-4d6c3f9f2de8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-p44ss" podUID="c477a3e4-8453-408f-927e-4d6c3f9f2de8" Aug 6 07:54:15.786719 containerd[1467]: time="2024-08-06T07:54:15.786601226Z" level=error msg="Failed to destroy network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.787325 containerd[1467]: time="2024-08-06T07:54:15.787235918Z" level=error msg="encountered an error cleaning up failed sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.787479 containerd[1467]: time="2024-08-06T07:54:15.787350032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d6hq7,Uid:c9550f08-461d-4533-b70f-c490af4113f0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.790120 kubelet[2504]: E0806 07:54:15.787728 2504 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:15.790120 kubelet[2504]: E0806 07:54:15.787800 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-d6hq7" Aug 6 07:54:15.790120 kubelet[2504]: E0806 07:54:15.787844 2504 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-d6hq7" Aug 6 07:54:15.790356 kubelet[2504]: E0806 07:54:15.787921 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-d6hq7_kube-system(c9550f08-461d-4533-b70f-c490af4113f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-d6hq7_kube-system(c9550f08-461d-4533-b70f-c490af4113f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-d6hq7" podUID="c9550f08-461d-4533-b70f-c490af4113f0" Aug 6 07:54:16.476259 kubelet[2504]: I0806 07:54:16.476036 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Aug 6 07:54:16.485543 kubelet[2504]: I0806 07:54:16.481599 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:16.485736 containerd[1467]: time="2024-08-06T07:54:16.483371571Z" level=info msg="StopPodSandbox for \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\"" Aug 6 07:54:16.485736 containerd[1467]: time="2024-08-06T07:54:16.484105335Z" level=info msg="StopPodSandbox for \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\"" Aug 6 07:54:16.496939 containerd[1467]: time="2024-08-06T07:54:16.496215890Z" level=info msg="Ensure that sandbox 55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4 in task-service has been cleanup successfully" Aug 6 07:54:16.497793 containerd[1467]: time="2024-08-06T07:54:16.497740979Z" level=info msg="Ensure that sandbox 80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb in task-service has been cleanup successfully" Aug 6 07:54:16.502424 kubelet[2504]: I0806 07:54:16.502367 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:16.504171 containerd[1467]: time="2024-08-06T07:54:16.503434997Z" level=info msg="StopPodSandbox for \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\"" Aug 6 07:54:16.504171 containerd[1467]: time="2024-08-06T07:54:16.503762445Z" level=info msg="Ensure that sandbox 646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10 in task-service has been cleanup 
successfully" Aug 6 07:54:16.509160 kubelet[2504]: I0806 07:54:16.509131 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:16.513458 containerd[1467]: time="2024-08-06T07:54:16.512707785Z" level=info msg="StopPodSandbox for \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\"" Aug 6 07:54:16.514661 containerd[1467]: time="2024-08-06T07:54:16.514479639Z" level=info msg="Ensure that sandbox 4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8 in task-service has been cleanup successfully" Aug 6 07:54:16.597727 containerd[1467]: time="2024-08-06T07:54:16.596392036Z" level=error msg="StopPodSandbox for \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\" failed" error="failed to destroy network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:16.598080 kubelet[2504]: E0806 07:54:16.597339 2504 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Aug 6 07:54:16.598080 kubelet[2504]: E0806 07:54:16.597431 2504 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"} Aug 6 07:54:16.598080 kubelet[2504]: E0806 07:54:16.597472 2504 kuberuntime_manager.go:1080] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9550f08-461d-4533-b70f-c490af4113f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 6 07:54:16.598080 kubelet[2504]: E0806 07:54:16.597503 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9550f08-461d-4533-b70f-c490af4113f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-d6hq7" podUID="c9550f08-461d-4533-b70f-c490af4113f0" Aug 6 07:54:16.628208 containerd[1467]: time="2024-08-06T07:54:16.628134321Z" level=error msg="StopPodSandbox for \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\" failed" error="failed to destroy network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:16.628876 kubelet[2504]: E0806 07:54:16.628501 2504 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:16.628876 kubelet[2504]: E0806 07:54:16.628643 2504 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb"} Aug 6 07:54:16.628876 kubelet[2504]: E0806 07:54:16.628701 2504 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6471e7f0-19b6-4660-901b-e1b98911d115\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 6 07:54:16.628876 kubelet[2504]: E0806 07:54:16.628752 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6471e7f0-19b6-4660-901b-e1b98911d115\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jbj5f" podUID="6471e7f0-19b6-4660-901b-e1b98911d115" Aug 6 07:54:16.635928 containerd[1467]: time="2024-08-06T07:54:16.635827364Z" level=error msg="StopPodSandbox for \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\" failed" error="failed to destroy network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Aug 6 07:54:16.636233 kubelet[2504]: E0806 07:54:16.636198 2504 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:16.636370 kubelet[2504]: E0806 07:54:16.636244 2504 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10"} Aug 6 07:54:16.636370 kubelet[2504]: E0806 07:54:16.636278 2504 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1565eaa-995e-4ed3-9f2f-779318f9f3a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 6 07:54:16.636370 kubelet[2504]: E0806 07:54:16.636314 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1565eaa-995e-4ed3-9f2f-779318f9f3a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5568985977-nxh2j" 
podUID="c1565eaa-995e-4ed3-9f2f-779318f9f3a8" Aug 6 07:54:16.638571 containerd[1467]: time="2024-08-06T07:54:16.638326803Z" level=error msg="StopPodSandbox for \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\" failed" error="failed to destroy network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 07:54:16.638701 kubelet[2504]: E0806 07:54:16.638660 2504 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:16.638772 kubelet[2504]: E0806 07:54:16.638712 2504 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8"} Aug 6 07:54:16.638772 kubelet[2504]: E0806 07:54:16.638761 2504 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c477a3e4-8453-408f-927e-4d6c3f9f2de8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 6 07:54:16.639387 kubelet[2504]: E0806 07:54:16.638800 2504 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"c477a3e4-8453-408f-927e-4d6c3f9f2de8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-p44ss" podUID="c477a3e4-8453-408f-927e-4d6c3f9f2de8" Aug 6 07:54:20.791269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395978980.mount: Deactivated successfully. Aug 6 07:54:20.844306 containerd[1467]: time="2024-08-06T07:54:20.844094832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:20.846724 containerd[1467]: time="2024-08-06T07:54:20.846647435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 6 07:54:20.848025 containerd[1467]: time="2024-08-06T07:54:20.847684321Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:20.851062 containerd[1467]: time="2024-08-06T07:54:20.850995488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:20.851710 containerd[1467]: time="2024-08-06T07:54:20.851490690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size 
\"115238612\" in 5.372138596s" Aug 6 07:54:20.851710 containerd[1467]: time="2024-08-06T07:54:20.851528016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 6 07:54:20.876837 containerd[1467]: time="2024-08-06T07:54:20.876788593Z" level=info msg="CreateContainer within sandbox \"7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 6 07:54:20.911270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304415132.mount: Deactivated successfully. Aug 6 07:54:20.916316 containerd[1467]: time="2024-08-06T07:54:20.916244758Z" level=info msg="CreateContainer within sandbox \"7531c043b7e24c5e3799a417882a2d3111a5811a194aeb6226c2f7c75d7dda67\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5f4e639dfd909915535e5e665c715687ebdd0d6b4bdc7e11e5f817787e93b3ba\"" Aug 6 07:54:20.919180 containerd[1467]: time="2024-08-06T07:54:20.917522696Z" level=info msg="StartContainer for \"5f4e639dfd909915535e5e665c715687ebdd0d6b4bdc7e11e5f817787e93b3ba\"" Aug 6 07:54:20.970329 systemd[1]: Started cri-containerd-5f4e639dfd909915535e5e665c715687ebdd0d6b4bdc7e11e5f817787e93b3ba.scope - libcontainer container 5f4e639dfd909915535e5e665c715687ebdd0d6b4bdc7e11e5f817787e93b3ba. Aug 6 07:54:21.023203 containerd[1467]: time="2024-08-06T07:54:21.023147094Z" level=info msg="StartContainer for \"5f4e639dfd909915535e5e665c715687ebdd0d6b4bdc7e11e5f817787e93b3ba\" returns successfully" Aug 6 07:54:21.136061 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 6 07:54:21.136956 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 6 07:54:21.526884 kubelet[2504]: E0806 07:54:21.526814 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:22.528291 kubelet[2504]: I0806 07:54:22.527413 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 6 07:54:22.529066 kubelet[2504]: E0806 07:54:22.528921 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:27.176611 kubelet[2504]: I0806 07:54:27.176110 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 6 07:54:27.177798 kubelet[2504]: E0806 07:54:27.177562 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:27.202580 kubelet[2504]: I0806 07:54:27.202532 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-v5qh8" podStartSLOduration=7.383207518 podCreationTimestamp="2024-08-06 07:54:05 +0000 UTC" firstStartedPulling="2024-08-06 07:54:06.032756318 +0000 UTC m=+19.925464466" lastFinishedPulling="2024-08-06 07:54:20.851930562 +0000 UTC m=+34.744638700" observedRunningTime="2024-08-06 07:54:21.553411362 +0000 UTC m=+35.446119520" watchObservedRunningTime="2024-08-06 07:54:27.202381752 +0000 UTC m=+41.095089909" Aug 6 07:54:27.541559 kubelet[2504]: E0806 07:54:27.540815 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:28.275399 containerd[1467]: time="2024-08-06T07:54:28.275338558Z" level=info msg="StopPodSandbox for 
\"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\"" Aug 6 07:54:28.277770 containerd[1467]: time="2024-08-06T07:54:28.277011774Z" level=info msg="StopPodSandbox for \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\"" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.406 [INFO][3790] k8s.go 608: Cleaning up netns ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.406 [INFO][3790] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" iface="eth0" netns="/var/run/netns/cni-f7920d3e-a61e-2ab4-c29b-1b4d4f9a9aa2" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.407 [INFO][3790] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" iface="eth0" netns="/var/run/netns/cni-f7920d3e-a61e-2ab4-c29b-1b4d4f9a9aa2" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.407 [INFO][3790] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" iface="eth0" netns="/var/run/netns/cni-f7920d3e-a61e-2ab4-c29b-1b4d4f9a9aa2" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.407 [INFO][3790] k8s.go 615: Releasing IP address(es) ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.407 [INFO][3790] utils.go 188: Calico CNI releasing IP address ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.587 [INFO][3813] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.588 [INFO][3813] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.588 [INFO][3813] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.602 [WARNING][3813] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.602 [INFO][3813] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.606 [INFO][3813] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:28.617137 containerd[1467]: 2024-08-06 07:54:28.612 [INFO][3790] k8s.go 621: Teardown processing complete. ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:28.617137 containerd[1467]: time="2024-08-06T07:54:28.616505542Z" level=info msg="TearDown network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\" successfully" Aug 6 07:54:28.617137 containerd[1467]: time="2024-08-06T07:54:28.616795902Z" level=info msg="StopPodSandbox for \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\" returns successfully" Aug 6 07:54:28.627342 systemd[1]: run-netns-cni\x2df7920d3e\x2da61e\x2d2ab4\x2dc29b\x2d1b4d4f9a9aa2.mount: Deactivated successfully. Aug 6 07:54:28.645794 systemd-networkd[1340]: vxlan.calico: Link UP Aug 6 07:54:28.649571 systemd-networkd[1340]: vxlan.calico: Gained carrier Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.380 [INFO][3789] k8s.go 608: Cleaning up netns ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.382 [INFO][3789] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" iface="eth0" netns="/var/run/netns/cni-115056d1-4648-76fb-bf92-94a707438ee0" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.383 [INFO][3789] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" iface="eth0" netns="/var/run/netns/cni-115056d1-4648-76fb-bf92-94a707438ee0" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.384 [INFO][3789] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" iface="eth0" netns="/var/run/netns/cni-115056d1-4648-76fb-bf92-94a707438ee0" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.384 [INFO][3789] k8s.go 615: Releasing IP address(es) ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.384 [INFO][3789] utils.go 188: Calico CNI releasing IP address ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.595 [INFO][3808] ipam_plugin.go 411: Releasing address using handleID ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.596 [INFO][3808] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.606 [INFO][3808] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.656 [WARNING][3808] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.656 [INFO][3808] ipam_plugin.go 439: Releasing address using workloadID ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.667 [INFO][3808] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:28.708188 containerd[1467]: 2024-08-06 07:54:28.685 [INFO][3789] k8s.go 621: Teardown processing complete. ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:28.714401 containerd[1467]: time="2024-08-06T07:54:28.708353179Z" level=info msg="TearDown network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\" successfully" Aug 6 07:54:28.714401 containerd[1467]: time="2024-08-06T07:54:28.708397102Z" level=info msg="StopPodSandbox for \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\" returns successfully" Aug 6 07:54:28.714740 systemd[1]: run-netns-cni\x2d115056d1\x2d4648\x2d76fb\x2dbf92\x2d94a707438ee0.mount: Deactivated successfully. 
Aug 6 07:54:28.718516 containerd[1467]: time="2024-08-06T07:54:28.718438215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jbj5f,Uid:6471e7f0-19b6-4660-901b-e1b98911d115,Namespace:calico-system,Attempt:1,}" Aug 6 07:54:28.722622 kubelet[2504]: E0806 07:54:28.721634 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:28.729485 containerd[1467]: time="2024-08-06T07:54:28.729413978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p44ss,Uid:c477a3e4-8453-408f-927e-4d6c3f9f2de8,Namespace:kube-system,Attempt:1,}" Aug 6 07:54:29.108056 systemd-networkd[1340]: cali9c0e167aac7: Link UP Aug 6 07:54:29.110990 systemd-networkd[1340]: cali9c0e167aac7: Gained carrier Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:28.937 [INFO][3872] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0 coredns-5dd5756b68- kube-system c477a3e4-8453-408f-927e-4d6c3f9f2de8 748 0 2024-08-06 07:53:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.0-f-5a6fbdc7ed coredns-5dd5756b68-p44ss eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c0e167aac7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Namespace="kube-system" Pod="coredns-5dd5756b68-p44ss" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:28.937 [INFO][3872] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Namespace="kube-system" Pod="coredns-5dd5756b68-p44ss" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.022 [INFO][3916] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" HandleID="k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.042 [INFO][3916] ipam_plugin.go 264: Auto assigning IP ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" HandleID="k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.0-f-5a6fbdc7ed", "pod":"coredns-5dd5756b68-p44ss", "timestamp":"2024-08-06 07:54:29.022548919 +0000 UTC"}, Hostname:"ci-3975.2.0-f-5a6fbdc7ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.043 [INFO][3916] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.043 [INFO][3916] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.043 [INFO][3916] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-f-5a6fbdc7ed' Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.050 [INFO][3916] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.064 [INFO][3916] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.073 [INFO][3916] ipam.go 489: Trying affinity for 192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.076 [INFO][3916] ipam.go 155: Attempting to load block cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.080 [INFO][3916] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.080 [INFO][3916] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.128/26 handle="k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.083 [INFO][3916] ipam.go 1685: Creating new handle: k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195 Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.089 [INFO][3916] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.128/26 handle="k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.097 [INFO][3916] ipam.go 1216: Successfully claimed IPs: [192.168.3.129/26] block=192.168.3.128/26 
handle="k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.097 [INFO][3916] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.129/26] handle="k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.097 [INFO][3916] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:29.139097 containerd[1467]: 2024-08-06 07:54:29.097 [INFO][3916] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.3.129/26] IPv6=[] ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" HandleID="k8s-pod-network.17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:29.141063 containerd[1467]: 2024-08-06 07:54:29.101 [INFO][3872] k8s.go 386: Populated endpoint ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Namespace="kube-system" Pod="coredns-5dd5756b68-p44ss" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c477a3e4-8453-408f-927e-4d6c3f9f2de8", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"", Pod:"coredns-5dd5756b68-p44ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c0e167aac7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:29.141063 containerd[1467]: 2024-08-06 07:54:29.102 [INFO][3872] k8s.go 387: Calico CNI using IPs: [192.168.3.129/32] ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Namespace="kube-system" Pod="coredns-5dd5756b68-p44ss" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:29.141063 containerd[1467]: 2024-08-06 07:54:29.102 [INFO][3872] dataplane_linux.go 68: Setting the host side veth name to cali9c0e167aac7 ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Namespace="kube-system" Pod="coredns-5dd5756b68-p44ss" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:29.141063 containerd[1467]: 2024-08-06 07:54:29.110 [INFO][3872] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Namespace="kube-system" Pod="coredns-5dd5756b68-p44ss" 
WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:29.141063 containerd[1467]: 2024-08-06 07:54:29.112 [INFO][3872] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Namespace="kube-system" Pod="coredns-5dd5756b68-p44ss" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c477a3e4-8453-408f-927e-4d6c3f9f2de8", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195", Pod:"coredns-5dd5756b68-p44ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c0e167aac7", MAC:"c2:3c:ad:ea:90:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:29.141063 containerd[1467]: 2024-08-06 07:54:29.134 [INFO][3872] k8s.go 500: Wrote updated endpoint to datastore ContainerID="17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195" Namespace="kube-system" Pod="coredns-5dd5756b68-p44ss" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:29.196910 systemd-networkd[1340]: cali40625ed866b: Link UP Aug 6 07:54:29.202935 systemd-networkd[1340]: cali40625ed866b: Gained carrier Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:28.968 [INFO][3875] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0 csi-node-driver- calico-system 6471e7f0-19b6-4660-901b-e1b98911d115 747 0 2024-08-06 07:54:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.0-f-5a6fbdc7ed csi-node-driver-jbj5f eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali40625ed866b [] []}} ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Namespace="calico-system" Pod="csi-node-driver-jbj5f" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:28.968 [INFO][3875] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" 
Namespace="calico-system" Pod="csi-node-driver-jbj5f" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.045 [INFO][3921] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" HandleID="k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.063 [INFO][3921] ipam_plugin.go 264: Auto assigning IP ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" HandleID="k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.0-f-5a6fbdc7ed", "pod":"csi-node-driver-jbj5f", "timestamp":"2024-08-06 07:54:29.04522365 +0000 UTC"}, Hostname:"ci-3975.2.0-f-5a6fbdc7ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.063 [INFO][3921] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.099 [INFO][3921] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.099 [INFO][3921] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-f-5a6fbdc7ed' Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.103 [INFO][3921] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.119 [INFO][3921] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.138 [INFO][3921] ipam.go 489: Trying affinity for 192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.144 [INFO][3921] ipam.go 155: Attempting to load block cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.151 [INFO][3921] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.151 [INFO][3921] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.128/26 handle="k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.157 [INFO][3921] ipam.go 1685: Creating new handle: k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.168 [INFO][3921] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.128/26 handle="k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.183 [INFO][3921] ipam.go 1216: Successfully claimed IPs: [192.168.3.130/26] block=192.168.3.128/26 
handle="k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.183 [INFO][3921] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.130/26] handle="k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.183 [INFO][3921] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:29.226592 containerd[1467]: 2024-08-06 07:54:29.183 [INFO][3921] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.3.130/26] IPv6=[] ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" HandleID="k8s-pod-network.1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:29.227541 containerd[1467]: 2024-08-06 07:54:29.188 [INFO][3875] k8s.go 386: Populated endpoint ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Namespace="calico-system" Pod="csi-node-driver-jbj5f" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6471e7f0-19b6-4660-901b-e1b98911d115", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"", Pod:"csi-node-driver-jbj5f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali40625ed866b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:29.227541 containerd[1467]: 2024-08-06 07:54:29.188 [INFO][3875] k8s.go 387: Calico CNI using IPs: [192.168.3.130/32] ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Namespace="calico-system" Pod="csi-node-driver-jbj5f" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:29.227541 containerd[1467]: 2024-08-06 07:54:29.188 [INFO][3875] dataplane_linux.go 68: Setting the host side veth name to cali40625ed866b ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Namespace="calico-system" Pod="csi-node-driver-jbj5f" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:29.227541 containerd[1467]: 2024-08-06 07:54:29.196 [INFO][3875] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Namespace="calico-system" Pod="csi-node-driver-jbj5f" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:29.227541 containerd[1467]: 2024-08-06 07:54:29.196 [INFO][3875] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" 
Namespace="calico-system" Pod="csi-node-driver-jbj5f" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6471e7f0-19b6-4660-901b-e1b98911d115", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc", Pod:"csi-node-driver-jbj5f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali40625ed866b", MAC:"6e:99:29:69:9d:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:29.227541 containerd[1467]: 2024-08-06 07:54:29.221 [INFO][3875] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc" Namespace="calico-system" Pod="csi-node-driver-jbj5f" 
WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:29.244392 containerd[1467]: time="2024-08-06T07:54:29.243053827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:54:29.244392 containerd[1467]: time="2024-08-06T07:54:29.243160948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:29.244392 containerd[1467]: time="2024-08-06T07:54:29.243186073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:54:29.244392 containerd[1467]: time="2024-08-06T07:54:29.243199099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:29.273962 containerd[1467]: time="2024-08-06T07:54:29.273039498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:54:29.273962 containerd[1467]: time="2024-08-06T07:54:29.273118349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:29.273962 containerd[1467]: time="2024-08-06T07:54:29.273156576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:54:29.273962 containerd[1467]: time="2024-08-06T07:54:29.273171557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:29.297519 systemd[1]: Started cri-containerd-17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195.scope - libcontainer container 17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195. 
Aug 6 07:54:29.356602 systemd[1]: Started cri-containerd-1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc.scope - libcontainer container 1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc. Aug 6 07:54:29.459012 containerd[1467]: time="2024-08-06T07:54:29.458882866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p44ss,Uid:c477a3e4-8453-408f-927e-4d6c3f9f2de8,Namespace:kube-system,Attempt:1,} returns sandbox id \"17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195\"" Aug 6 07:54:29.462328 kubelet[2504]: E0806 07:54:29.461687 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:29.477442 containerd[1467]: time="2024-08-06T07:54:29.477364946Z" level=info msg="CreateContainer within sandbox \"17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 6 07:54:29.485992 containerd[1467]: time="2024-08-06T07:54:29.485893260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jbj5f,Uid:6471e7f0-19b6-4660-901b-e1b98911d115,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc\"" Aug 6 07:54:29.491408 containerd[1467]: time="2024-08-06T07:54:29.491015583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 6 07:54:29.516358 containerd[1467]: time="2024-08-06T07:54:29.516279773Z" level=info msg="CreateContainer within sandbox \"17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca26e7c447c42e035bcae323f616b69ebad550829c514a85e71f8984f99e19c7\"" Aug 6 07:54:29.520618 containerd[1467]: time="2024-08-06T07:54:29.518000555Z" level=info msg="StartContainer for 
\"ca26e7c447c42e035bcae323f616b69ebad550829c514a85e71f8984f99e19c7\"" Aug 6 07:54:29.562534 systemd[1]: Started cri-containerd-ca26e7c447c42e035bcae323f616b69ebad550829c514a85e71f8984f99e19c7.scope - libcontainer container ca26e7c447c42e035bcae323f616b69ebad550829c514a85e71f8984f99e19c7. Aug 6 07:54:29.630110 containerd[1467]: time="2024-08-06T07:54:29.622842357Z" level=info msg="StartContainer for \"ca26e7c447c42e035bcae323f616b69ebad550829c514a85e71f8984f99e19c7\" returns successfully" Aug 6 07:54:30.236813 systemd-networkd[1340]: vxlan.calico: Gained IPv6LL Aug 6 07:54:30.238858 systemd-networkd[1340]: cali9c0e167aac7: Gained IPv6LL Aug 6 07:54:30.572721 kubelet[2504]: E0806 07:54:30.572575 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:30.620175 systemd-networkd[1340]: cali40625ed866b: Gained IPv6LL Aug 6 07:54:30.631135 kubelet[2504]: I0806 07:54:30.631062 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-p44ss" podStartSLOduration=32.630775002 podCreationTimestamp="2024-08-06 07:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:54:30.59547734 +0000 UTC m=+44.488185502" watchObservedRunningTime="2024-08-06 07:54:30.630775002 +0000 UTC m=+44.523483161" Aug 6 07:54:31.001400 containerd[1467]: time="2024-08-06T07:54:31.001328411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:31.003404 containerd[1467]: time="2024-08-06T07:54:31.003275993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 6 07:54:31.004957 containerd[1467]: time="2024-08-06T07:54:31.004886848Z" level=info 
msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:31.008880 containerd[1467]: time="2024-08-06T07:54:31.008503368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:31.010376 containerd[1467]: time="2024-08-06T07:54:31.010319549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.519246493s" Aug 6 07:54:31.010717 containerd[1467]: time="2024-08-06T07:54:31.010583288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 6 07:54:31.016501 containerd[1467]: time="2024-08-06T07:54:31.016164902Z" level=info msg="CreateContainer within sandbox \"1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 6 07:54:31.065530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718431185.mount: Deactivated successfully. 
Aug 6 07:54:31.068936 containerd[1467]: time="2024-08-06T07:54:31.068646721Z" level=info msg="CreateContainer within sandbox \"1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9f785d8c76d8bae5c0f84ec66985452f05a31d396ee4a6626a2e20b90eb62cc3\"" Aug 6 07:54:31.080834 containerd[1467]: time="2024-08-06T07:54:31.080294973Z" level=info msg="StartContainer for \"9f785d8c76d8bae5c0f84ec66985452f05a31d396ee4a6626a2e20b90eb62cc3\"" Aug 6 07:54:31.133519 systemd[1]: Started cri-containerd-9f785d8c76d8bae5c0f84ec66985452f05a31d396ee4a6626a2e20b90eb62cc3.scope - libcontainer container 9f785d8c76d8bae5c0f84ec66985452f05a31d396ee4a6626a2e20b90eb62cc3. Aug 6 07:54:31.174531 containerd[1467]: time="2024-08-06T07:54:31.174482736Z" level=info msg="StartContainer for \"9f785d8c76d8bae5c0f84ec66985452f05a31d396ee4a6626a2e20b90eb62cc3\" returns successfully" Aug 6 07:54:31.178184 containerd[1467]: time="2024-08-06T07:54:31.177825509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 6 07:54:31.577781 kubelet[2504]: E0806 07:54:31.577674 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:31.961035 kubelet[2504]: I0806 07:54:31.959357 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 6 07:54:31.961035 kubelet[2504]: E0806 07:54:31.960438 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:32.269278 systemd[1]: run-containerd-runc-k8s.io-5f4e639dfd909915535e5e665c715687ebdd0d6b4bdc7e11e5f817787e93b3ba-runc.OobX63.mount: Deactivated successfully. 
Aug 6 07:54:32.288245 containerd[1467]: time="2024-08-06T07:54:32.288078621Z" level=info msg="StopPodSandbox for \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\"" Aug 6 07:54:32.291797 containerd[1467]: time="2024-08-06T07:54:32.291758005Z" level=info msg="StopPodSandbox for \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\"" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.460 [INFO][4183] k8s.go 608: Cleaning up netns ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.460 [INFO][4183] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" iface="eth0" netns="/var/run/netns/cni-815a4b33-6132-296f-4b09-5c3ed743bfc3" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.462 [INFO][4183] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" iface="eth0" netns="/var/run/netns/cni-815a4b33-6132-296f-4b09-5c3ed743bfc3" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.462 [INFO][4183] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" iface="eth0" netns="/var/run/netns/cni-815a4b33-6132-296f-4b09-5c3ed743bfc3" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.462 [INFO][4183] k8s.go 615: Releasing IP address(es) ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.462 [INFO][4183] utils.go 188: Calico CNI releasing IP address ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.505 [INFO][4204] ipam_plugin.go 411: Releasing address using handleID ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.505 [INFO][4204] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.505 [INFO][4204] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.514 [WARNING][4204] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.514 [INFO][4204] ipam_plugin.go 439: Releasing address using workloadID ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.517 [INFO][4204] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:32.526288 containerd[1467]: 2024-08-06 07:54:32.521 [INFO][4183] k8s.go 621: Teardown processing complete. ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:32.529776 containerd[1467]: time="2024-08-06T07:54:32.527189834Z" level=info msg="TearDown network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\" successfully" Aug 6 07:54:32.529776 containerd[1467]: time="2024-08-06T07:54:32.527248547Z" level=info msg="StopPodSandbox for \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\" returns successfully" Aug 6 07:54:32.534056 containerd[1467]: time="2024-08-06T07:54:32.533427506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5568985977-nxh2j,Uid:c1565eaa-995e-4ed3-9f2f-779318f9f3a8,Namespace:calico-system,Attempt:1,}" Aug 6 07:54:32.535525 systemd[1]: run-netns-cni\x2d815a4b33\x2d6132\x2d296f\x2d4b09\x2d5c3ed743bfc3.mount: Deactivated successfully. 
Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.443 [INFO][4182] k8s.go 608: Cleaning up netns ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.444 [INFO][4182] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" iface="eth0" netns="/var/run/netns/cni-2aaaff68-cd2c-ec3b-f780-a3e6b3a424b6" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.445 [INFO][4182] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" iface="eth0" netns="/var/run/netns/cni-2aaaff68-cd2c-ec3b-f780-a3e6b3a424b6" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.445 [INFO][4182] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" iface="eth0" netns="/var/run/netns/cni-2aaaff68-cd2c-ec3b-f780-a3e6b3a424b6" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.445 [INFO][4182] k8s.go 615: Releasing IP address(es) ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.445 [INFO][4182] utils.go 188: Calico CNI releasing IP address ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.512 [INFO][4200] ipam_plugin.go 411: Releasing address using handleID ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.513 [INFO][4200] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.517 [INFO][4200] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.532 [WARNING][4200] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.532 [INFO][4200] ipam_plugin.go 439: Releasing address using workloadID ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.539 [INFO][4200] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:32.551595 containerd[1467]: 2024-08-06 07:54:32.544 [INFO][4182] k8s.go 621: Teardown processing complete. 
ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Aug 6 07:54:32.552790 containerd[1467]: time="2024-08-06T07:54:32.552331857Z" level=info msg="TearDown network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\" successfully" Aug 6 07:54:32.552790 containerd[1467]: time="2024-08-06T07:54:32.552364773Z" level=info msg="StopPodSandbox for \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\" returns successfully" Aug 6 07:54:32.555066 kubelet[2504]: E0806 07:54:32.553126 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:32.555898 containerd[1467]: time="2024-08-06T07:54:32.555852868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d6hq7,Uid:c9550f08-461d-4533-b70f-c490af4113f0,Namespace:kube-system,Attempt:1,}" Aug 6 07:54:32.582285 kubelet[2504]: E0806 07:54:32.582245 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:32.583157 kubelet[2504]: E0806 07:54:32.583133 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:32.835586 systemd-networkd[1340]: calidd766066cb0: Link UP Aug 6 07:54:32.840600 systemd-networkd[1340]: calidd766066cb0: Gained carrier Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.672 [INFO][4214] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0 calico-kube-controllers-5568985977- calico-system c1565eaa-995e-4ed3-9f2f-779318f9f3a8 787 0 2024-08-06 07:54:05 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5568985977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.0-f-5a6fbdc7ed calico-kube-controllers-5568985977-nxh2j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidd766066cb0 [] []}} ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Namespace="calico-system" Pod="calico-kube-controllers-5568985977-nxh2j" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.675 [INFO][4214] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Namespace="calico-system" Pod="calico-kube-controllers-5568985977-nxh2j" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.730 [INFO][4243] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" HandleID="k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.752 [INFO][4243] ipam_plugin.go 264: Auto assigning IP ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" HandleID="k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ede00), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-3975.2.0-f-5a6fbdc7ed", "pod":"calico-kube-controllers-5568985977-nxh2j", "timestamp":"2024-08-06 07:54:32.730259508 +0000 UTC"}, Hostname:"ci-3975.2.0-f-5a6fbdc7ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.753 [INFO][4243] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.753 [INFO][4243] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.753 [INFO][4243] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-f-5a6fbdc7ed' Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.757 [INFO][4243] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.765 [INFO][4243] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.777 [INFO][4243] ipam.go 489: Trying affinity for 192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.782 [INFO][4243] ipam.go 155: Attempting to load block cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.789 [INFO][4243] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.789 [INFO][4243] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.128/26 handle="k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" 
host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.793 [INFO][4243] ipam.go 1685: Creating new handle: k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324 Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.802 [INFO][4243] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.128/26 handle="k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.816 [INFO][4243] ipam.go 1216: Successfully claimed IPs: [192.168.3.131/26] block=192.168.3.128/26 handle="k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.819 [INFO][4243] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.131/26] handle="k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.820 [INFO][4243] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 6 07:54:32.879611 containerd[1467]: 2024-08-06 07:54:32.820 [INFO][4243] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.3.131/26] IPv6=[] ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" HandleID="k8s-pod-network.37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.881703 containerd[1467]: 2024-08-06 07:54:32.828 [INFO][4214] k8s.go 386: Populated endpoint ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Namespace="calico-system" Pod="calico-kube-controllers-5568985977-nxh2j" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0", GenerateName:"calico-kube-controllers-5568985977-", Namespace:"calico-system", SelfLink:"", UID:"c1565eaa-995e-4ed3-9f2f-779318f9f3a8", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5568985977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"", Pod:"calico-kube-controllers-5568985977-nxh2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.3.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd766066cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:32.881703 containerd[1467]: 2024-08-06 07:54:32.829 [INFO][4214] k8s.go 387: Calico CNI using IPs: [192.168.3.131/32] ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Namespace="calico-system" Pod="calico-kube-controllers-5568985977-nxh2j" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.881703 containerd[1467]: 2024-08-06 07:54:32.830 [INFO][4214] dataplane_linux.go 68: Setting the host side veth name to calidd766066cb0 ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Namespace="calico-system" Pod="calico-kube-controllers-5568985977-nxh2j" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.881703 containerd[1467]: 2024-08-06 07:54:32.835 [INFO][4214] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Namespace="calico-system" Pod="calico-kube-controllers-5568985977-nxh2j" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.881703 containerd[1467]: 2024-08-06 07:54:32.835 [INFO][4214] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Namespace="calico-system" Pod="calico-kube-controllers-5568985977-nxh2j" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0", GenerateName:"calico-kube-controllers-5568985977-", Namespace:"calico-system", SelfLink:"", UID:"c1565eaa-995e-4ed3-9f2f-779318f9f3a8", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5568985977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324", Pod:"calico-kube-controllers-5568985977-nxh2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd766066cb0", MAC:"9e:08:dd:96:f8:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:32.881703 containerd[1467]: 2024-08-06 07:54:32.860 [INFO][4214] k8s.go 500: Wrote updated endpoint to datastore ContainerID="37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324" Namespace="calico-system" Pod="calico-kube-controllers-5568985977-nxh2j" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:32.942773 systemd-networkd[1340]: cali1025806133d: Link UP Aug 6 07:54:32.946234 systemd-networkd[1340]: cali1025806133d: Gained 
carrier Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.660 [INFO][4229] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0 coredns-5dd5756b68- kube-system c9550f08-461d-4533-b70f-c490af4113f0 786 0 2024-08-06 07:53:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.0-f-5a6fbdc7ed coredns-5dd5756b68-d6hq7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1025806133d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Namespace="kube-system" Pod="coredns-5dd5756b68-d6hq7" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.661 [INFO][4229] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Namespace="kube-system" Pod="coredns-5dd5756b68-d6hq7" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.744 [INFO][4239] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" HandleID="k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.760 [INFO][4239] ipam_plugin.go 264: Auto assigning IP ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" HandleID="k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" 
Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002be260), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.0-f-5a6fbdc7ed", "pod":"coredns-5dd5756b68-d6hq7", "timestamp":"2024-08-06 07:54:32.744325549 +0000 UTC"}, Hostname:"ci-3975.2.0-f-5a6fbdc7ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.761 [INFO][4239] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.820 [INFO][4239] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.820 [INFO][4239] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-f-5a6fbdc7ed' Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.825 [INFO][4239] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.843 [INFO][4239] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.862 [INFO][4239] ipam.go 489: Trying affinity for 192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.876 [INFO][4239] ipam.go 155: Attempting to load block cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.884 [INFO][4239] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.885 
[INFO][4239] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.128/26 handle="k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.889 [INFO][4239] ipam.go 1685: Creating new handle: k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9 Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.903 [INFO][4239] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.128/26 handle="k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.931 [INFO][4239] ipam.go 1216: Successfully claimed IPs: [192.168.3.132/26] block=192.168.3.128/26 handle="k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.931 [INFO][4239] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.132/26] handle="k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.931 [INFO][4239] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 6 07:54:32.977494 containerd[1467]: 2024-08-06 07:54:32.931 [INFO][4239] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.3.132/26] IPv6=[] ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" HandleID="k8s-pod-network.6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.978286 containerd[1467]: 2024-08-06 07:54:32.936 [INFO][4229] k8s.go 386: Populated endpoint ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Namespace="kube-system" Pod="coredns-5dd5756b68-d6hq7" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c9550f08-461d-4533-b70f-c490af4113f0", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"", Pod:"coredns-5dd5756b68-d6hq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1025806133d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:32.978286 containerd[1467]: 2024-08-06 07:54:32.937 [INFO][4229] k8s.go 387: Calico CNI using IPs: [192.168.3.132/32] ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Namespace="kube-system" Pod="coredns-5dd5756b68-d6hq7" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.978286 containerd[1467]: 2024-08-06 07:54:32.937 [INFO][4229] dataplane_linux.go 68: Setting the host side veth name to cali1025806133d ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Namespace="kube-system" Pod="coredns-5dd5756b68-d6hq7" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.978286 containerd[1467]: 2024-08-06 07:54:32.947 [INFO][4229] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Namespace="kube-system" Pod="coredns-5dd5756b68-d6hq7" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.978286 containerd[1467]: 2024-08-06 07:54:32.951 [INFO][4229] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Namespace="kube-system" Pod="coredns-5dd5756b68-d6hq7" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c9550f08-461d-4533-b70f-c490af4113f0", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9", Pod:"coredns-5dd5756b68-d6hq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1025806133d", MAC:"86:1e:ea:90:e9:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:32.978286 containerd[1467]: 2024-08-06 07:54:32.973 [INFO][4229] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9" Namespace="kube-system" Pod="coredns-5dd5756b68-d6hq7" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0" Aug 6 07:54:32.988805 containerd[1467]: time="2024-08-06T07:54:32.986822338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:54:32.988805 containerd[1467]: time="2024-08-06T07:54:32.986892605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:32.988805 containerd[1467]: time="2024-08-06T07:54:32.986912482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:54:32.988805 containerd[1467]: time="2024-08-06T07:54:32.986925880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:33.038045 containerd[1467]: time="2024-08-06T07:54:33.037232378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:54:33.038276 containerd[1467]: time="2024-08-06T07:54:33.038024269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:33.038276 containerd[1467]: time="2024-08-06T07:54:33.038071649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:54:33.038276 containerd[1467]: time="2024-08-06T07:54:33.038093889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:33.058215 systemd[1]: Started cri-containerd-37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324.scope - libcontainer container 37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324. Aug 6 07:54:33.108279 systemd[1]: Started cri-containerd-6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9.scope - libcontainer container 6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9. Aug 6 07:54:33.135249 systemd[1]: run-netns-cni\x2d2aaaff68\x2dcd2c\x2dec3b\x2df780\x2da3e6b3a424b6.mount: Deactivated successfully. Aug 6 07:54:33.238003 containerd[1467]: time="2024-08-06T07:54:33.237284212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d6hq7,Uid:c9550f08-461d-4533-b70f-c490af4113f0,Namespace:kube-system,Attempt:1,} returns sandbox id \"6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9\"" Aug 6 07:54:33.238696 kubelet[2504]: E0806 07:54:33.238670 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:33.251285 containerd[1467]: time="2024-08-06T07:54:33.251235522Z" level=info msg="CreateContainer within sandbox \"6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 6 07:54:33.277326 containerd[1467]: time="2024-08-06T07:54:33.277285374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5568985977-nxh2j,Uid:c1565eaa-995e-4ed3-9f2f-779318f9f3a8,Namespace:calico-system,Attempt:1,} returns sandbox id \"37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324\"" Aug 6 07:54:33.285303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114489550.mount: Deactivated successfully. 
Aug 6 07:54:33.287856 containerd[1467]: time="2024-08-06T07:54:33.287117649Z" level=info msg="CreateContainer within sandbox \"6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d04fda287767f17b2f8f97117346368bf8c59695cf3edde88584188c59237e6\"" Aug 6 07:54:33.288888 containerd[1467]: time="2024-08-06T07:54:33.288837659Z" level=info msg="StartContainer for \"3d04fda287767f17b2f8f97117346368bf8c59695cf3edde88584188c59237e6\"" Aug 6 07:54:33.316661 containerd[1467]: time="2024-08-06T07:54:33.314338748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:33.319004 containerd[1467]: time="2024-08-06T07:54:33.318934007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 6 07:54:33.321827 containerd[1467]: time="2024-08-06T07:54:33.321768356Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:33.329699 containerd[1467]: time="2024-08-06T07:54:33.329640095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:33.331404 containerd[1467]: time="2024-08-06T07:54:33.330784315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.152909619s" 
Aug 6 07:54:33.331593 containerd[1467]: time="2024-08-06T07:54:33.331565122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 6 07:54:33.334331 containerd[1467]: time="2024-08-06T07:54:33.334207674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 6 07:54:33.336183 containerd[1467]: time="2024-08-06T07:54:33.336104050Z" level=info msg="CreateContainer within sandbox \"1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 6 07:54:33.358314 systemd[1]: Started cri-containerd-3d04fda287767f17b2f8f97117346368bf8c59695cf3edde88584188c59237e6.scope - libcontainer container 3d04fda287767f17b2f8f97117346368bf8c59695cf3edde88584188c59237e6. Aug 6 07:54:33.364305 containerd[1467]: time="2024-08-06T07:54:33.364020764Z" level=info msg="CreateContainer within sandbox \"1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2e8bfcf207179e8d27808ce612c2de4b7d2329675f5dfac130009b01f9b40930\"" Aug 6 07:54:33.370956 containerd[1467]: time="2024-08-06T07:54:33.369384411Z" level=info msg="StartContainer for \"2e8bfcf207179e8d27808ce612c2de4b7d2329675f5dfac130009b01f9b40930\"" Aug 6 07:54:33.419297 systemd[1]: Started cri-containerd-2e8bfcf207179e8d27808ce612c2de4b7d2329675f5dfac130009b01f9b40930.scope - libcontainer container 2e8bfcf207179e8d27808ce612c2de4b7d2329675f5dfac130009b01f9b40930. 
Aug 6 07:54:33.434055 containerd[1467]: time="2024-08-06T07:54:33.433218117Z" level=info msg="StartContainer for \"3d04fda287767f17b2f8f97117346368bf8c59695cf3edde88584188c59237e6\" returns successfully" Aug 6 07:54:33.486645 containerd[1467]: time="2024-08-06T07:54:33.486595496Z" level=info msg="StartContainer for \"2e8bfcf207179e8d27808ce612c2de4b7d2329675f5dfac130009b01f9b40930\" returns successfully" Aug 6 07:54:33.586665 kubelet[2504]: E0806 07:54:33.586631 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:33.645996 kubelet[2504]: I0806 07:54:33.644664 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-d6hq7" podStartSLOduration=35.644619097 podCreationTimestamp="2024-08-06 07:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:54:33.616607576 +0000 UTC m=+47.509315739" watchObservedRunningTime="2024-08-06 07:54:33.644619097 +0000 UTC m=+47.537327265" Aug 6 07:54:34.144160 systemd-networkd[1340]: calidd766066cb0: Gained IPv6LL Aug 6 07:54:34.524292 systemd-networkd[1340]: cali1025806133d: Gained IPv6LL Aug 6 07:54:34.619645 kubelet[2504]: E0806 07:54:34.619605 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:34.662490 kubelet[2504]: I0806 07:54:34.661033 2504 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 6 07:54:34.672130 kubelet[2504]: I0806 07:54:34.670945 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-jbj5f" 
podStartSLOduration=25.826371901 podCreationTimestamp="2024-08-06 07:54:05 +0000 UTC" firstStartedPulling="2024-08-06 07:54:29.488114672 +0000 UTC m=+43.380822808" lastFinishedPulling="2024-08-06 07:54:33.332646345 +0000 UTC m=+47.225354501" observedRunningTime="2024-08-06 07:54:33.645626933 +0000 UTC m=+47.538335124" watchObservedRunningTime="2024-08-06 07:54:34.670903594 +0000 UTC m=+48.563611750" Aug 6 07:54:34.678086 kubelet[2504]: I0806 07:54:34.677885 2504 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 6 07:54:35.616745 containerd[1467]: time="2024-08-06T07:54:35.616690014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:35.619695 containerd[1467]: time="2024-08-06T07:54:35.619633815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 6 07:54:35.621562 containerd[1467]: time="2024-08-06T07:54:35.621510020Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:35.626593 kubelet[2504]: E0806 07:54:35.626510 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:35.629065 containerd[1467]: time="2024-08-06T07:54:35.627143709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:35.629065 containerd[1467]: time="2024-08-06T07:54:35.627921520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with 
image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.293432183s" Aug 6 07:54:35.629065 containerd[1467]: time="2024-08-06T07:54:35.627957890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 6 07:54:35.659920 containerd[1467]: time="2024-08-06T07:54:35.659853520Z" level=info msg="CreateContainer within sandbox \"37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 6 07:54:35.759789 containerd[1467]: time="2024-08-06T07:54:35.759703920Z" level=info msg="CreateContainer within sandbox \"37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4af3f251d1f3f95ac441859bfdfe50648fed9d90d2b82037c1f93514b775a800\"" Aug 6 07:54:35.761504 containerd[1467]: time="2024-08-06T07:54:35.761263153Z" level=info msg="StartContainer for \"4af3f251d1f3f95ac441859bfdfe50648fed9d90d2b82037c1f93514b775a800\"" Aug 6 07:54:35.828317 systemd[1]: Started cri-containerd-4af3f251d1f3f95ac441859bfdfe50648fed9d90d2b82037c1f93514b775a800.scope - libcontainer container 4af3f251d1f3f95ac441859bfdfe50648fed9d90d2b82037c1f93514b775a800. 
Aug 6 07:54:35.906082 containerd[1467]: time="2024-08-06T07:54:35.905259647Z" level=info msg="StartContainer for \"4af3f251d1f3f95ac441859bfdfe50648fed9d90d2b82037c1f93514b775a800\" returns successfully" Aug 6 07:54:36.630105 kubelet[2504]: E0806 07:54:36.630076 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:54:36.657216 kubelet[2504]: I0806 07:54:36.657155 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5568985977-nxh2j" podStartSLOduration=29.319878301 podCreationTimestamp="2024-08-06 07:54:05 +0000 UTC" firstStartedPulling="2024-08-06 07:54:33.291534852 +0000 UTC m=+47.184243002" lastFinishedPulling="2024-08-06 07:54:35.628748772 +0000 UTC m=+49.521456923" observedRunningTime="2024-08-06 07:54:36.656521561 +0000 UTC m=+50.549229729" watchObservedRunningTime="2024-08-06 07:54:36.657092222 +0000 UTC m=+50.549800378" Aug 6 07:54:36.687688 systemd[1]: run-containerd-runc-k8s.io-4af3f251d1f3f95ac441859bfdfe50648fed9d90d2b82037c1f93514b775a800-runc.DqE2M1.mount: Deactivated successfully. Aug 6 07:54:40.289650 kubelet[2504]: I0806 07:54:40.289552 2504 topology_manager.go:215] "Topology Admit Handler" podUID="e2499d60-022a-4c60-b134-1f5712911b9c" podNamespace="calico-apiserver" podName="calico-apiserver-6dd74b4fb-hg7hb" Aug 6 07:54:40.310467 kubelet[2504]: I0806 07:54:40.309185 2504 topology_manager.go:215] "Topology Admit Handler" podUID="ee07754a-85c0-420d-afac-a2e4703ca541" podNamespace="calico-apiserver" podName="calico-apiserver-6dd74b4fb-zbkd9" Aug 6 07:54:40.344891 systemd[1]: Created slice kubepods-besteffort-pode2499d60_022a_4c60_b134_1f5712911b9c.slice - libcontainer container kubepods-besteffort-pode2499d60_022a_4c60_b134_1f5712911b9c.slice. 
Aug 6 07:54:40.348726 kubelet[2504]: I0806 07:54:40.348186 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2499d60-022a-4c60-b134-1f5712911b9c-calico-apiserver-certs\") pod \"calico-apiserver-6dd74b4fb-hg7hb\" (UID: \"e2499d60-022a-4c60-b134-1f5712911b9c\") " pod="calico-apiserver/calico-apiserver-6dd74b4fb-hg7hb" Aug 6 07:54:40.349393 kubelet[2504]: I0806 07:54:40.349093 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqxlg\" (UniqueName: \"kubernetes.io/projected/e2499d60-022a-4c60-b134-1f5712911b9c-kube-api-access-cqxlg\") pod \"calico-apiserver-6dd74b4fb-hg7hb\" (UID: \"e2499d60-022a-4c60-b134-1f5712911b9c\") " pod="calico-apiserver/calico-apiserver-6dd74b4fb-hg7hb" Aug 6 07:54:40.349393 kubelet[2504]: I0806 07:54:40.349170 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee07754a-85c0-420d-afac-a2e4703ca541-calico-apiserver-certs\") pod \"calico-apiserver-6dd74b4fb-zbkd9\" (UID: \"ee07754a-85c0-420d-afac-a2e4703ca541\") " pod="calico-apiserver/calico-apiserver-6dd74b4fb-zbkd9" Aug 6 07:54:40.349393 kubelet[2504]: I0806 07:54:40.349222 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cv9m\" (UniqueName: \"kubernetes.io/projected/ee07754a-85c0-420d-afac-a2e4703ca541-kube-api-access-2cv9m\") pod \"calico-apiserver-6dd74b4fb-zbkd9\" (UID: \"ee07754a-85c0-420d-afac-a2e4703ca541\") " pod="calico-apiserver/calico-apiserver-6dd74b4fb-zbkd9" Aug 6 07:54:40.387428 systemd[1]: Created slice kubepods-besteffort-podee07754a_85c0_420d_afac_a2e4703ca541.slice - libcontainer container kubepods-besteffort-podee07754a_85c0_420d_afac_a2e4703ca541.slice. 
Aug 6 07:54:40.441622 systemd[1]: Started sshd@9-143.244.180.140:22-147.75.109.163:50606.service - OpenSSH per-connection server daemon (147.75.109.163:50606). Aug 6 07:54:40.453168 kubelet[2504]: E0806 07:54:40.453102 2504 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 6 07:54:40.454960 kubelet[2504]: E0806 07:54:40.454719 2504 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 6 07:54:40.487230 kubelet[2504]: E0806 07:54:40.487163 2504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee07754a-85c0-420d-afac-a2e4703ca541-calico-apiserver-certs podName:ee07754a-85c0-420d-afac-a2e4703ca541 nodeName:}" failed. No retries permitted until 2024-08-06 07:54:40.953213727 +0000 UTC m=+54.845921880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ee07754a-85c0-420d-afac-a2e4703ca541-calico-apiserver-certs") pod "calico-apiserver-6dd74b4fb-zbkd9" (UID: "ee07754a-85c0-420d-afac-a2e4703ca541") : secret "calico-apiserver-certs" not found Aug 6 07:54:40.487531 kubelet[2504]: E0806 07:54:40.487267 2504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2499d60-022a-4c60-b134-1f5712911b9c-calico-apiserver-certs podName:e2499d60-022a-4c60-b134-1f5712911b9c nodeName:}" failed. No retries permitted until 2024-08-06 07:54:40.987221962 +0000 UTC m=+54.879930101 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e2499d60-022a-4c60-b134-1f5712911b9c-calico-apiserver-certs") pod "calico-apiserver-6dd74b4fb-hg7hb" (UID: "e2499d60-022a-4c60-b134-1f5712911b9c") : secret "calico-apiserver-certs" not found Aug 6 07:54:40.636154 sshd[4537]: Accepted publickey for core from 147.75.109.163 port 50606 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k Aug 6 07:54:40.640016 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:54:40.660328 systemd-logind[1443]: New session 8 of user core. Aug 6 07:54:40.666324 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 6 07:54:40.998450 containerd[1467]: time="2024-08-06T07:54:40.998218628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd74b4fb-zbkd9,Uid:ee07754a-85c0-420d-afac-a2e4703ca541,Namespace:calico-apiserver,Attempt:0,}" Aug 6 07:54:41.076457 sshd[4537]: pam_unix(sshd:session): session closed for user core Aug 6 07:54:41.105593 systemd[1]: sshd@9-143.244.180.140:22-147.75.109.163:50606.service: Deactivated successfully. Aug 6 07:54:41.112686 systemd[1]: session-8.scope: Deactivated successfully. Aug 6 07:54:41.115631 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Aug 6 07:54:41.118755 systemd-logind[1443]: Removed session 8. 
Aug 6 07:54:41.273953 containerd[1467]: time="2024-08-06T07:54:41.273787261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd74b4fb-hg7hb,Uid:e2499d60-022a-4c60-b134-1f5712911b9c,Namespace:calico-apiserver,Attempt:0,}" Aug 6 07:54:41.377078 systemd-networkd[1340]: calif85c1a31413: Link UP Aug 6 07:54:41.378408 systemd-networkd[1340]: calif85c1a31413: Gained carrier Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.170 [INFO][4560] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0 calico-apiserver-6dd74b4fb- calico-apiserver ee07754a-85c0-420d-afac-a2e4703ca541 922 0 2024-08-06 07:54:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dd74b4fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.0-f-5a6fbdc7ed calico-apiserver-6dd74b4fb-zbkd9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif85c1a31413 [] []}} ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-zbkd9" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.170 [INFO][4560] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-zbkd9" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.240 [INFO][4575] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" HandleID="k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.256 [INFO][4575] ipam_plugin.go 264: Auto assigning IP ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" HandleID="k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.0-f-5a6fbdc7ed", "pod":"calico-apiserver-6dd74b4fb-zbkd9", "timestamp":"2024-08-06 07:54:41.240280281 +0000 UTC"}, Hostname:"ci-3975.2.0-f-5a6fbdc7ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.257 [INFO][4575] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.257 [INFO][4575] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.257 [INFO][4575] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-f-5a6fbdc7ed' Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.263 [INFO][4575] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.275 [INFO][4575] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.286 [INFO][4575] ipam.go 489: Trying affinity for 192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.292 [INFO][4575] ipam.go 155: Attempting to load block cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.299 [INFO][4575] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.299 [INFO][4575] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.128/26 handle="k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.304 [INFO][4575] ipam.go 1685: Creating new handle: k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738 Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.313 [INFO][4575] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.128/26 handle="k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.345 [INFO][4575] ipam.go 1216: Successfully claimed IPs: [192.168.3.133/26] block=192.168.3.128/26 
handle="k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.348 [INFO][4575] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.133/26] handle="k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.349 [INFO][4575] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:41.423476 containerd[1467]: 2024-08-06 07:54:41.349 [INFO][4575] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.3.133/26] IPv6=[] ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" HandleID="k8s-pod-network.021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" Aug 6 07:54:41.427614 containerd[1467]: 2024-08-06 07:54:41.359 [INFO][4560] k8s.go 386: Populated endpoint ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-zbkd9" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0", GenerateName:"calico-apiserver-6dd74b4fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee07754a-85c0-420d-afac-a2e4703ca541", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd74b4fb", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"", Pod:"calico-apiserver-6dd74b4fb-zbkd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif85c1a31413", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:41.427614 containerd[1467]: 2024-08-06 07:54:41.360 [INFO][4560] k8s.go 387: Calico CNI using IPs: [192.168.3.133/32] ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-zbkd9" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" Aug 6 07:54:41.427614 containerd[1467]: 2024-08-06 07:54:41.360 [INFO][4560] dataplane_linux.go 68: Setting the host side veth name to calif85c1a31413 ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-zbkd9" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" Aug 6 07:54:41.427614 containerd[1467]: 2024-08-06 07:54:41.382 [INFO][4560] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-zbkd9" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" Aug 6 07:54:41.427614 containerd[1467]: 2024-08-06 07:54:41.383 
[INFO][4560] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-zbkd9" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0", GenerateName:"calico-apiserver-6dd74b4fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee07754a-85c0-420d-afac-a2e4703ca541", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd74b4fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738", Pod:"calico-apiserver-6dd74b4fb-zbkd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif85c1a31413", MAC:"56:11:3f:6d:5c:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:41.427614 containerd[1467]: 2024-08-06 07:54:41.402 [INFO][4560] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-zbkd9" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--zbkd9-eth0" Aug 6 07:54:41.619234 containerd[1467]: time="2024-08-06T07:54:41.616805733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:54:41.619234 containerd[1467]: time="2024-08-06T07:54:41.616906452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:41.619234 containerd[1467]: time="2024-08-06T07:54:41.616938653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:54:41.623122 containerd[1467]: time="2024-08-06T07:54:41.616959061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:41.764331 systemd[1]: run-containerd-runc-k8s.io-021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738-runc.AIhqDn.mount: Deactivated successfully. Aug 6 07:54:41.778810 systemd[1]: Started cri-containerd-021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738.scope - libcontainer container 021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738. 
Aug 6 07:54:41.850257 systemd-networkd[1340]: cali5ba4dcccdd2: Link UP Aug 6 07:54:41.851688 systemd-networkd[1340]: cali5ba4dcccdd2: Gained carrier Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.469 [INFO][4581] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0 calico-apiserver-6dd74b4fb- calico-apiserver e2499d60-022a-4c60-b134-1f5712911b9c 927 0 2024-08-06 07:54:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dd74b4fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.0-f-5a6fbdc7ed calico-apiserver-6dd74b4fb-hg7hb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ba4dcccdd2 [] []}} ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-hg7hb" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.469 [INFO][4581] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-hg7hb" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.685 [INFO][4609] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" HandleID="k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 
07:54:41.713 [INFO][4609] ipam_plugin.go 264: Auto assigning IP ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" HandleID="k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050d10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.0-f-5a6fbdc7ed", "pod":"calico-apiserver-6dd74b4fb-hg7hb", "timestamp":"2024-08-06 07:54:41.682795127 +0000 UTC"}, Hostname:"ci-3975.2.0-f-5a6fbdc7ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.713 [INFO][4609] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.713 [INFO][4609] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.713 [INFO][4609] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-f-5a6fbdc7ed' Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.723 [INFO][4609] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.776 [INFO][4609] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.798 [INFO][4609] ipam.go 489: Trying affinity for 192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.807 [INFO][4609] ipam.go 155: Attempting to load block cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.813 [INFO][4609] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.128/26 host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.813 [INFO][4609] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.128/26 handle="k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.816 [INFO][4609] ipam.go 1685: Creating new handle: k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6 Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.826 [INFO][4609] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.128/26 handle="k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.835 [INFO][4609] ipam.go 1216: Successfully claimed IPs: [192.168.3.134/26] block=192.168.3.128/26 
handle="k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.835 [INFO][4609] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.134/26] handle="k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" host="ci-3975.2.0-f-5a6fbdc7ed" Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.835 [INFO][4609] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:41.891097 containerd[1467]: 2024-08-06 07:54:41.835 [INFO][4609] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.3.134/26] IPv6=[] ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" HandleID="k8s-pod-network.19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" Aug 6 07:54:41.894817 containerd[1467]: 2024-08-06 07:54:41.840 [INFO][4581] k8s.go 386: Populated endpoint ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-hg7hb" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0", GenerateName:"calico-apiserver-6dd74b4fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2499d60-022a-4c60-b134-1f5712911b9c", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd74b4fb", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"", Pod:"calico-apiserver-6dd74b4fb-hg7hb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ba4dcccdd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:41.894817 containerd[1467]: 2024-08-06 07:54:41.841 [INFO][4581] k8s.go 387: Calico CNI using IPs: [192.168.3.134/32] ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-hg7hb" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" Aug 6 07:54:41.894817 containerd[1467]: 2024-08-06 07:54:41.845 [INFO][4581] dataplane_linux.go 68: Setting the host side veth name to cali5ba4dcccdd2 ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-hg7hb" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" Aug 6 07:54:41.894817 containerd[1467]: 2024-08-06 07:54:41.852 [INFO][4581] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-hg7hb" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" Aug 6 07:54:41.894817 containerd[1467]: 2024-08-06 07:54:41.854 
[INFO][4581] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-hg7hb" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0", GenerateName:"calico-apiserver-6dd74b4fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2499d60-022a-4c60-b134-1f5712911b9c", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd74b4fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6", Pod:"calico-apiserver-6dd74b4fb-hg7hb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ba4dcccdd2", MAC:"e2:83:25:82:ce:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:41.894817 containerd[1467]: 2024-08-06 07:54:41.886 [INFO][4581] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6" Namespace="calico-apiserver" Pod="calico-apiserver-6dd74b4fb-hg7hb" WorkloadEndpoint="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--apiserver--6dd74b4fb--hg7hb-eth0" Aug 6 07:54:41.951962 containerd[1467]: time="2024-08-06T07:54:41.950900195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:54:41.951962 containerd[1467]: time="2024-08-06T07:54:41.951208439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:41.951962 containerd[1467]: time="2024-08-06T07:54:41.951291201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:54:41.951962 containerd[1467]: time="2024-08-06T07:54:41.951314776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:54:42.008292 systemd[1]: Started cri-containerd-19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6.scope - libcontainer container 19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6. 
Aug 6 07:54:42.096879 containerd[1467]: time="2024-08-06T07:54:42.096822412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd74b4fb-zbkd9,Uid:ee07754a-85c0-420d-afac-a2e4703ca541,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738\"" Aug 6 07:54:42.105163 containerd[1467]: time="2024-08-06T07:54:42.105107195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 6 07:54:42.300362 containerd[1467]: time="2024-08-06T07:54:42.300215047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd74b4fb-hg7hb,Uid:e2499d60-022a-4c60-b134-1f5712911b9c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6\"" Aug 6 07:54:42.908622 systemd-networkd[1340]: cali5ba4dcccdd2: Gained IPv6LL Aug 6 07:54:43.036419 systemd-networkd[1340]: calif85c1a31413: Gained IPv6LL Aug 6 07:54:45.014334 containerd[1467]: time="2024-08-06T07:54:45.014243727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:45.016341 containerd[1467]: time="2024-08-06T07:54:45.016189767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Aug 6 07:54:45.018557 containerd[1467]: time="2024-08-06T07:54:45.018131923Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:45.021921 containerd[1467]: time="2024-08-06T07:54:45.021859461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:45.023140 containerd[1467]: 
time="2024-08-06T07:54:45.023093440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.91787717s" Aug 6 07:54:45.023593 containerd[1467]: time="2024-08-06T07:54:45.023309397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 6 07:54:45.024410 containerd[1467]: time="2024-08-06T07:54:45.024081614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 6 07:54:45.028934 containerd[1467]: time="2024-08-06T07:54:45.028862454Z" level=info msg="CreateContainer within sandbox \"021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 6 07:54:45.056728 containerd[1467]: time="2024-08-06T07:54:45.056536536Z" level=info msg="CreateContainer within sandbox \"021dc137a3943643a045c92d983ffc53b41880b056bcac2d2dafbeebc3c54738\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f58e4ea2dd35806c575280831c80b3ebb56ab880d92710eb726a33c78d6a39e\"" Aug 6 07:54:45.058568 containerd[1467]: time="2024-08-06T07:54:45.058506065Z" level=info msg="StartContainer for \"5f58e4ea2dd35806c575280831c80b3ebb56ab880d92710eb726a33c78d6a39e\"" Aug 6 07:54:45.132386 systemd[1]: Started cri-containerd-5f58e4ea2dd35806c575280831c80b3ebb56ab880d92710eb726a33c78d6a39e.scope - libcontainer container 5f58e4ea2dd35806c575280831c80b3ebb56ab880d92710eb726a33c78d6a39e. 
Aug 6 07:54:45.218951 containerd[1467]: time="2024-08-06T07:54:45.218875805Z" level=info msg="StartContainer for \"5f58e4ea2dd35806c575280831c80b3ebb56ab880d92710eb726a33c78d6a39e\" returns successfully" Aug 6 07:54:45.390100 containerd[1467]: time="2024-08-06T07:54:45.389801546Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:54:45.394040 containerd[1467]: time="2024-08-06T07:54:45.392939905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Aug 6 07:54:45.397260 containerd[1467]: time="2024-08-06T07:54:45.396981671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 372.857891ms" Aug 6 07:54:45.397260 containerd[1467]: time="2024-08-06T07:54:45.397100070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 6 07:54:45.401637 containerd[1467]: time="2024-08-06T07:54:45.401453233Z" level=info msg="CreateContainer within sandbox \"19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 6 07:54:45.437443 containerd[1467]: time="2024-08-06T07:54:45.437342888Z" level=info msg="CreateContainer within sandbox \"19cfc03e12eae35c153a18e3f7b58b25130aaf2ccb01d4dcc35ac7c34f0fbcc6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7bacc8b17e63a0bbd7593243734c23f3c4a75cbe025e341c61556e7af527c118\"" Aug 6 07:54:45.439268 containerd[1467]: 
time="2024-08-06T07:54:45.439213917Z" level=info msg="StartContainer for \"7bacc8b17e63a0bbd7593243734c23f3c4a75cbe025e341c61556e7af527c118\"" Aug 6 07:54:45.513759 systemd[1]: Started cri-containerd-7bacc8b17e63a0bbd7593243734c23f3c4a75cbe025e341c61556e7af527c118.scope - libcontainer container 7bacc8b17e63a0bbd7593243734c23f3c4a75cbe025e341c61556e7af527c118. Aug 6 07:54:45.656362 containerd[1467]: time="2024-08-06T07:54:45.655491248Z" level=info msg="StartContainer for \"7bacc8b17e63a0bbd7593243734c23f3c4a75cbe025e341c61556e7af527c118\" returns successfully" Aug 6 07:54:45.746026 kubelet[2504]: I0806 07:54:45.745310 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dd74b4fb-hg7hb" podStartSLOduration=2.650559065 podCreationTimestamp="2024-08-06 07:54:40 +0000 UTC" firstStartedPulling="2024-08-06 07:54:42.303249864 +0000 UTC m=+56.195958015" lastFinishedPulling="2024-08-06 07:54:45.397914751 +0000 UTC m=+59.290622901" observedRunningTime="2024-08-06 07:54:45.715561132 +0000 UTC m=+59.608269289" watchObservedRunningTime="2024-08-06 07:54:45.745223951 +0000 UTC m=+59.637932108" Aug 6 07:54:45.746026 kubelet[2504]: I0806 07:54:45.745792 2504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dd74b4fb-zbkd9" podStartSLOduration=2.824611068 podCreationTimestamp="2024-08-06 07:54:40 +0000 UTC" firstStartedPulling="2024-08-06 07:54:42.102687731 +0000 UTC m=+55.995395882" lastFinishedPulling="2024-08-06 07:54:45.02382446 +0000 UTC m=+58.916532631" observedRunningTime="2024-08-06 07:54:45.742775764 +0000 UTC m=+59.635483924" watchObservedRunningTime="2024-08-06 07:54:45.745747817 +0000 UTC m=+59.638455974" Aug 6 07:54:46.103546 systemd[1]: Started sshd@10-143.244.180.140:22-147.75.109.163:51900.service - OpenSSH per-connection server daemon (147.75.109.163:51900). 
Aug 6 07:54:46.255635 sshd[4798]: Accepted publickey for core from 147.75.109.163 port 51900 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k Aug 6 07:54:46.261522 sshd[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:54:46.284391 systemd-logind[1443]: New session 9 of user core. Aug 6 07:54:46.288092 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 6 07:54:46.425341 containerd[1467]: time="2024-08-06T07:54:46.424926582Z" level=info msg="StopPodSandbox for \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\"" Aug 6 07:54:46.699528 kubelet[2504]: I0806 07:54:46.699039 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 6 07:54:46.701343 kubelet[2504]: I0806 07:54:46.700372 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.608 [WARNING][4821] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6471e7f0-19b6-4660-901b-e1b98911d115", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc", Pod:"csi-node-driver-jbj5f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali40625ed866b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.609 [INFO][4821] k8s.go 608: Cleaning up netns ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.609 [INFO][4821] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" iface="eth0" netns="" Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.610 [INFO][4821] k8s.go 615: Releasing IP address(es) ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.610 [INFO][4821] utils.go 188: Calico CNI releasing IP address ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.690 [INFO][4827] ipam_plugin.go 411: Releasing address using handleID ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.691 [INFO][4827] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.691 [INFO][4827] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.705 [WARNING][4827] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.705 [INFO][4827] ipam_plugin.go 439: Releasing address using workloadID ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.709 [INFO][4827] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:46.720147 containerd[1467]: 2024-08-06 07:54:46.716 [INFO][4821] k8s.go 621: Teardown processing complete. ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:46.720147 containerd[1467]: time="2024-08-06T07:54:46.719392042Z" level=info msg="TearDown network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\" successfully" Aug 6 07:54:46.720147 containerd[1467]: time="2024-08-06T07:54:46.719434001Z" level=info msg="StopPodSandbox for \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\" returns successfully" Aug 6 07:54:46.748235 containerd[1467]: time="2024-08-06T07:54:46.744201944Z" level=info msg="RemovePodSandbox for \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\"" Aug 6 07:54:46.827268 containerd[1467]: time="2024-08-06T07:54:46.758728233Z" level=info msg="Forcibly stopping sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\"" Aug 6 07:54:47.060183 sshd[4798]: pam_unix(sshd:session): session closed for user core Aug 6 07:54:47.082347 systemd[1]: sshd@10-143.244.180.140:22-147.75.109.163:51900.service: Deactivated successfully. 
Aug 6 07:54:47.087941 systemd[1]: session-9.scope: Deactivated successfully. Aug 6 07:54:47.091584 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Aug 6 07:54:47.094401 systemd-logind[1443]: Removed session 9. Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.020 [WARNING][4850] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6471e7f0-19b6-4660-901b-e1b98911d115", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"1d6e84aff724cde6d8ea261a8b5e02170f9dd915fc7380ad74ef5719d73aa4dc", Pod:"csi-node-driver-jbj5f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali40625ed866b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.021 [INFO][4850] k8s.go 608: Cleaning up netns ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.022 [INFO][4850] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" iface="eth0" netns="" Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.022 [INFO][4850] k8s.go 615: Releasing IP address(es) ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.022 [INFO][4850] utils.go 188: Calico CNI releasing IP address ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.133 [INFO][4856] ipam_plugin.go 411: Releasing address using handleID ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.133 [INFO][4856] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.133 [INFO][4856] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.151 [WARNING][4856] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.151 [INFO][4856] ipam_plugin.go 439: Releasing address using workloadID ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" HandleID="k8s-pod-network.80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-csi--node--driver--jbj5f-eth0" Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.156 [INFO][4856] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:47.161238 containerd[1467]: 2024-08-06 07:54:47.158 [INFO][4850] k8s.go 621: Teardown processing complete. ContainerID="80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb" Aug 6 07:54:47.161238 containerd[1467]: time="2024-08-06T07:54:47.160881027Z" level=info msg="TearDown network for sandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\" successfully" Aug 6 07:54:47.182280 containerd[1467]: time="2024-08-06T07:54:47.181965811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 6 07:54:47.182280 containerd[1467]: time="2024-08-06T07:54:47.182107827Z" level=info msg="RemovePodSandbox \"80b27028f7dca7666cc6459df26ca7ca285314301fb583f5796197c0aa9ceaeb\" returns successfully" Aug 6 07:54:47.183300 containerd[1467]: time="2024-08-06T07:54:47.182739105Z" level=info msg="StopPodSandbox for \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\"" Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.284 [WARNING][4878] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c477a3e4-8453-408f-927e-4d6c3f9f2de8", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195", Pod:"coredns-5dd5756b68-p44ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c0e167aac7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.284 [INFO][4878] k8s.go 608: Cleaning up netns ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.284 [INFO][4878] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" iface="eth0" netns="" Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.284 [INFO][4878] k8s.go 615: Releasing IP address(es) ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.284 [INFO][4878] utils.go 188: Calico CNI releasing IP address ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.352 [INFO][4884] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.352 [INFO][4884] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.352 [INFO][4884] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.367 [WARNING][4884] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.367 [INFO][4884] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.369 [INFO][4884] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:47.375312 containerd[1467]: 2024-08-06 07:54:47.373 [INFO][4878] k8s.go 621: Teardown processing complete. 
ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:47.378942 containerd[1467]: time="2024-08-06T07:54:47.376553465Z" level=info msg="TearDown network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\" successfully" Aug 6 07:54:47.378942 containerd[1467]: time="2024-08-06T07:54:47.376600104Z" level=info msg="StopPodSandbox for \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\" returns successfully" Aug 6 07:54:47.378942 containerd[1467]: time="2024-08-06T07:54:47.378115891Z" level=info msg="RemovePodSandbox for \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\"" Aug 6 07:54:47.378942 containerd[1467]: time="2024-08-06T07:54:47.378164428Z" level=info msg="Forcibly stopping sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\"" Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.472 [WARNING][4902] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c477a3e4-8453-408f-927e-4d6c3f9f2de8", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"17c1d44bd47cc1d9be76dab3cb1930a924e282cddb73c64d3184a6489c9ee195", Pod:"coredns-5dd5756b68-p44ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c0e167aac7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.473 [INFO][4902] k8s.go 608: 
Cleaning up netns ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.473 [INFO][4902] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" iface="eth0" netns="" Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.473 [INFO][4902] k8s.go 615: Releasing IP address(es) ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.473 [INFO][4902] utils.go 188: Calico CNI releasing IP address ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.520 [INFO][4908] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.520 [INFO][4908] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.520 [INFO][4908] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.537 [WARNING][4908] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.539 [INFO][4908] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" HandleID="k8s-pod-network.4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--p44ss-eth0" Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.543 [INFO][4908] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:47.554303 containerd[1467]: 2024-08-06 07:54:47.547 [INFO][4902] k8s.go 621: Teardown processing complete. ContainerID="4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8" Aug 6 07:54:47.555918 containerd[1467]: time="2024-08-06T07:54:47.554493909Z" level=info msg="TearDown network for sandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\" successfully" Aug 6 07:54:47.563372 containerd[1467]: time="2024-08-06T07:54:47.562730279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 6 07:54:47.563372 containerd[1467]: time="2024-08-06T07:54:47.563002968Z" level=info msg="RemovePodSandbox \"4b012b36a88ab48fa7b7bb5bbf892da0014fbb122efd2550d325072f78a4ebe8\" returns successfully" Aug 6 07:54:47.566489 containerd[1467]: time="2024-08-06T07:54:47.565797007Z" level=info msg="StopPodSandbox for \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\"" Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.650 [WARNING][4926] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0", GenerateName:"calico-kube-controllers-5568985977-", Namespace:"calico-system", SelfLink:"", UID:"c1565eaa-995e-4ed3-9f2f-779318f9f3a8", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5568985977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324", Pod:"calico-kube-controllers-5568985977-nxh2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd766066cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.650 [INFO][4926] k8s.go 608: Cleaning up netns ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.650 [INFO][4926] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" iface="eth0" netns="" Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.650 [INFO][4926] k8s.go 615: Releasing IP address(es) ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.650 [INFO][4926] utils.go 188: Calico CNI releasing IP address ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.697 [INFO][4932] ipam_plugin.go 411: Releasing address using handleID ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.697 [INFO][4932] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.697 [INFO][4932] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.713 [WARNING][4932] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.713 [INFO][4932] ipam_plugin.go 439: Releasing address using workloadID ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.717 [INFO][4932] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:47.724822 containerd[1467]: 2024-08-06 07:54:47.721 [INFO][4926] k8s.go 621: Teardown processing complete. ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:47.727121 containerd[1467]: time="2024-08-06T07:54:47.724867006Z" level=info msg="TearDown network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\" successfully" Aug 6 07:54:47.727121 containerd[1467]: time="2024-08-06T07:54:47.724892805Z" level=info msg="StopPodSandbox for \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\" returns successfully" Aug 6 07:54:47.727741 containerd[1467]: time="2024-08-06T07:54:47.727636807Z" level=info msg="RemovePodSandbox for \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\"" Aug 6 07:54:47.727741 containerd[1467]: time="2024-08-06T07:54:47.727683347Z" level=info msg="Forcibly stopping sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\"" Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.797 [WARNING][4950] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0", GenerateName:"calico-kube-controllers-5568985977-", Namespace:"calico-system", SelfLink:"", UID:"c1565eaa-995e-4ed3-9f2f-779318f9f3a8", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5568985977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"37e57245be030efb35ebdf0b45151e8b4d6b04898e1083d3cd1b4846c160f324", Pod:"calico-kube-controllers-5568985977-nxh2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd766066cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.797 [INFO][4950] k8s.go 608: Cleaning up netns ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.797 [INFO][4950] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" iface="eth0" netns="" Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.797 [INFO][4950] k8s.go 615: Releasing IP address(es) ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.797 [INFO][4950] utils.go 188: Calico CNI releasing IP address ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.840 [INFO][4957] ipam_plugin.go 411: Releasing address using handleID ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.840 [INFO][4957] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.840 [INFO][4957] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.848 [WARNING][4957] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.848 [INFO][4957] ipam_plugin.go 439: Releasing address using workloadID ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" HandleID="k8s-pod-network.646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-calico--kube--controllers--5568985977--nxh2j-eth0" Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.851 [INFO][4957] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 07:54:47.856488 containerd[1467]: 2024-08-06 07:54:47.853 [INFO][4950] k8s.go 621: Teardown processing complete. ContainerID="646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10" Aug 6 07:54:47.858640 containerd[1467]: time="2024-08-06T07:54:47.856463284Z" level=info msg="TearDown network for sandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\" successfully" Aug 6 07:54:47.873632 containerd[1467]: time="2024-08-06T07:54:47.873160538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 6 07:54:47.873632 containerd[1467]: time="2024-08-06T07:54:47.873275043Z" level=info msg="RemovePodSandbox \"646d09070ac7f8d1fb7a6e76d4f0c6a25d14961db68fde76f9fbffac4f13ec10\" returns successfully"
Aug 6 07:54:47.874851 containerd[1467]: time="2024-08-06T07:54:47.874264406Z" level=info msg="StopPodSandbox for \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\""
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.936 [WARNING][4975] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c9550f08-461d-4533-b70f-c490af4113f0", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9", Pod:"coredns-5dd5756b68-d6hq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1025806133d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.936 [INFO][4975] k8s.go 608: Cleaning up netns ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.936 [INFO][4975] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" iface="eth0" netns=""
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.937 [INFO][4975] k8s.go 615: Releasing IP address(es) ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.937 [INFO][4975] utils.go 188: Calico CNI releasing IP address ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.973 [INFO][4981] ipam_plugin.go 411: Releasing address using handleID ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0"
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.973 [INFO][4981] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.973 [INFO][4981] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.985 [WARNING][4981] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0"
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.985 [INFO][4981] ipam_plugin.go 439: Releasing address using workloadID ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0"
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.987 [INFO][4981] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 6 07:54:47.994062 containerd[1467]: 2024-08-06 07:54:47.990 [INFO][4975] k8s.go 621: Teardown processing complete. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"
Aug 6 07:54:47.994062 containerd[1467]: time="2024-08-06T07:54:47.993484210Z" level=info msg="TearDown network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\" successfully"
Aug 6 07:54:47.994062 containerd[1467]: time="2024-08-06T07:54:47.993515797Z" level=info msg="StopPodSandbox for \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\" returns successfully"
Aug 6 07:54:47.995984 containerd[1467]: time="2024-08-06T07:54:47.995923034Z" level=info msg="RemovePodSandbox for \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\""
Aug 6 07:54:47.996144 containerd[1467]: time="2024-08-06T07:54:47.996057549Z" level=info msg="Forcibly stopping sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\""
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.057 [WARNING][4999] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c9550f08-461d-4533-b70f-c490af4113f0", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 7, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-f-5a6fbdc7ed", ContainerID:"6e0596bfd512a6777e89e395f173da12de4286c86209730391553915a08f5ce9", Pod:"coredns-5dd5756b68-d6hq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1025806133d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.058 [INFO][4999] k8s.go 608: Cleaning up netns ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.058 [INFO][4999] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" iface="eth0" netns=""
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.058 [INFO][4999] k8s.go 615: Releasing IP address(es) ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.058 [INFO][4999] utils.go 188: Calico CNI releasing IP address ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.093 [INFO][5005] ipam_plugin.go 411: Releasing address using handleID ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0"
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.094 [INFO][5005] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.094 [INFO][5005] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.107 [WARNING][5005] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0"
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.107 [INFO][5005] ipam_plugin.go 439: Releasing address using workloadID ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" HandleID="k8s-pod-network.55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4" Workload="ci--3975.2.0--f--5a6fbdc7ed-k8s-coredns--5dd5756b68--d6hq7-eth0"
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.109 [INFO][5005] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 6 07:54:48.114366 containerd[1467]: 2024-08-06 07:54:48.111 [INFO][4999] k8s.go 621: Teardown processing complete. ContainerID="55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4"
Aug 6 07:54:48.114366 containerd[1467]: time="2024-08-06T07:54:48.113858420Z" level=info msg="TearDown network for sandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\" successfully"
Aug 6 07:54:48.118913 containerd[1467]: time="2024-08-06T07:54:48.118660745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 6 07:54:48.118913 containerd[1467]: time="2024-08-06T07:54:48.118762009Z" level=info msg="RemovePodSandbox \"55870b105d7c660e280087874947b6871e075e5fd6825968e013ef385440eba4\" returns successfully"
Aug 6 07:54:51.112422 kubelet[2504]: I0806 07:54:51.112282 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 6 07:54:52.081270 systemd[1]: Started sshd@11-143.244.180.140:22-147.75.109.163:51906.service - OpenSSH per-connection server daemon (147.75.109.163:51906).
Aug 6 07:54:52.173338 sshd[5028]: Accepted publickey for core from 147.75.109.163 port 51906 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:54:52.178611 sshd[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:54:52.188778 systemd-logind[1443]: New session 10 of user core.
Aug 6 07:54:52.195368 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 6 07:54:52.445323 sshd[5028]: pam_unix(sshd:session): session closed for user core
Aug 6 07:54:52.456496 systemd[1]: sshd@11-143.244.180.140:22-147.75.109.163:51906.service: Deactivated successfully.
Aug 6 07:54:52.460193 systemd[1]: session-10.scope: Deactivated successfully.
Aug 6 07:54:52.461590 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
Aug 6 07:54:52.474388 systemd[1]: Started sshd@12-143.244.180.140:22-147.75.109.163:51916.service - OpenSSH per-connection server daemon (147.75.109.163:51916).
Aug 6 07:54:52.477304 systemd-logind[1443]: Removed session 10.
Aug 6 07:54:52.528183 sshd[5041]: Accepted publickey for core from 147.75.109.163 port 51916 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:54:52.530683 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:54:52.537809 systemd-logind[1443]: New session 11 of user core.
Aug 6 07:54:52.547281 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 6 07:54:53.617880 sshd[5041]: pam_unix(sshd:session): session closed for user core
Aug 6 07:54:53.634433 systemd[1]: sshd@12-143.244.180.140:22-147.75.109.163:51916.service: Deactivated successfully.
Aug 6 07:54:53.642140 systemd[1]: session-11.scope: Deactivated successfully.
Aug 6 07:54:53.647914 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
Aug 6 07:54:53.657206 systemd[1]: Started sshd@13-143.244.180.140:22-147.75.109.163:51924.service - OpenSSH per-connection server daemon (147.75.109.163:51924).
Aug 6 07:54:53.664027 systemd-logind[1443]: Removed session 11.
Aug 6 07:54:53.761048 sshd[5052]: Accepted publickey for core from 147.75.109.163 port 51924 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:54:53.762531 sshd[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:54:53.768691 systemd-logind[1443]: New session 12 of user core.
Aug 6 07:54:53.773352 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 6 07:54:53.950735 sshd[5052]: pam_unix(sshd:session): session closed for user core
Aug 6 07:54:53.955826 systemd[1]: sshd@13-143.244.180.140:22-147.75.109.163:51924.service: Deactivated successfully.
Aug 6 07:54:53.958883 systemd[1]: session-12.scope: Deactivated successfully.
Aug 6 07:54:53.961351 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
Aug 6 07:54:53.963126 systemd-logind[1443]: Removed session 12.
Aug 6 07:54:58.966080 systemd[1]: Started sshd@14-143.244.180.140:22-147.75.109.163:35430.service - OpenSSH per-connection server daemon (147.75.109.163:35430).
Aug 6 07:54:59.030799 sshd[5069]: Accepted publickey for core from 147.75.109.163 port 35430 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:54:59.033146 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:54:59.040569 systemd-logind[1443]: New session 13 of user core.
Aug 6 07:54:59.046486 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 6 07:54:59.202570 sshd[5069]: pam_unix(sshd:session): session closed for user core
Aug 6 07:54:59.207174 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
Aug 6 07:54:59.207436 systemd[1]: sshd@14-143.244.180.140:22-147.75.109.163:35430.service: Deactivated successfully.
Aug 6 07:54:59.210855 systemd[1]: session-13.scope: Deactivated successfully.
Aug 6 07:54:59.215317 systemd-logind[1443]: Removed session 13.
Aug 6 07:55:04.235696 systemd[1]: Started sshd@15-143.244.180.140:22-147.75.109.163:51814.service - OpenSSH per-connection server daemon (147.75.109.163:51814).
Aug 6 07:55:04.276786 kubelet[2504]: E0806 07:55:04.276730 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:55:04.324039 sshd[5131]: Accepted publickey for core from 147.75.109.163 port 51814 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:04.330260 sshd[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:04.343087 systemd-logind[1443]: New session 14 of user core.
Aug 6 07:55:04.352366 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 6 07:55:04.567940 sshd[5131]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:04.579203 systemd[1]: sshd@15-143.244.180.140:22-147.75.109.163:51814.service: Deactivated successfully.
Aug 6 07:55:04.584336 systemd[1]: session-14.scope: Deactivated successfully.
Aug 6 07:55:04.587834 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit.
Aug 6 07:55:04.598045 systemd-logind[1443]: Removed session 14.
Aug 6 07:55:05.271611 kubelet[2504]: E0806 07:55:05.271559 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:55:09.272480 kubelet[2504]: E0806 07:55:09.271777 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:55:09.592535 systemd[1]: Started sshd@16-143.244.180.140:22-147.75.109.163:51816.service - OpenSSH per-connection server daemon (147.75.109.163:51816).
Aug 6 07:55:09.731585 sshd[5168]: Accepted publickey for core from 147.75.109.163 port 51816 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:09.731410 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:09.748262 systemd-logind[1443]: New session 15 of user core.
Aug 6 07:55:09.753509 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 6 07:55:10.105444 sshd[5168]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:10.110435 systemd[1]: sshd@16-143.244.180.140:22-147.75.109.163:51816.service: Deactivated successfully.
Aug 6 07:55:10.114692 systemd[1]: session-15.scope: Deactivated successfully.
Aug 6 07:55:10.121930 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit.
Aug 6 07:55:10.124121 systemd-logind[1443]: Removed session 15.
Aug 6 07:55:13.080569 systemd[1]: Started sshd@17-143.244.180.140:22-103.240.6.43:52274.service - OpenSSH per-connection server daemon (103.240.6.43:52274).
Aug 6 07:55:13.273790 kubelet[2504]: E0806 07:55:13.273234 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:55:14.163557 sshd[5186]: Invalid user kb from 103.240.6.43 port 52274
Aug 6 07:55:14.364077 sshd[5186]: Received disconnect from 103.240.6.43 port 52274:11: Bye Bye [preauth]
Aug 6 07:55:14.364077 sshd[5186]: Disconnected from invalid user kb 103.240.6.43 port 52274 [preauth]
Aug 6 07:55:14.366799 systemd[1]: sshd@17-143.244.180.140:22-103.240.6.43:52274.service: Deactivated successfully.
Aug 6 07:55:15.126578 systemd[1]: Started sshd@18-143.244.180.140:22-147.75.109.163:51212.service - OpenSSH per-connection server daemon (147.75.109.163:51212).
Aug 6 07:55:15.184317 sshd[5191]: Accepted publickey for core from 147.75.109.163 port 51212 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:15.187216 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:15.195305 systemd-logind[1443]: New session 16 of user core.
Aug 6 07:55:15.199277 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 6 07:55:15.362254 sshd[5191]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:15.367750 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit.
Aug 6 07:55:15.368465 systemd[1]: sshd@18-143.244.180.140:22-147.75.109.163:51212.service: Deactivated successfully.
Aug 6 07:55:15.372993 systemd[1]: session-16.scope: Deactivated successfully.
Aug 6 07:55:15.375332 systemd-logind[1443]: Removed session 16.
Aug 6 07:55:20.377300 systemd[1]: Started sshd@19-143.244.180.140:22-147.75.109.163:51228.service - OpenSSH per-connection server daemon (147.75.109.163:51228).
Aug 6 07:55:20.446565 sshd[5204]: Accepted publickey for core from 147.75.109.163 port 51228 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:20.449281 sshd[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:20.455496 systemd-logind[1443]: New session 17 of user core.
Aug 6 07:55:20.464346 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 6 07:55:20.641799 sshd[5204]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:20.654084 systemd[1]: sshd@19-143.244.180.140:22-147.75.109.163:51228.service: Deactivated successfully.
Aug 6 07:55:20.657651 systemd[1]: session-17.scope: Deactivated successfully.
Aug 6 07:55:20.659937 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
Aug 6 07:55:20.664560 systemd[1]: Started sshd@20-143.244.180.140:22-147.75.109.163:51238.service - OpenSSH per-connection server daemon (147.75.109.163:51238).
Aug 6 07:55:20.667445 systemd-logind[1443]: Removed session 17.
Aug 6 07:55:20.759036 sshd[5217]: Accepted publickey for core from 147.75.109.163 port 51238 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:20.761204 sshd[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:20.769577 systemd-logind[1443]: New session 18 of user core.
Aug 6 07:55:20.773365 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 6 07:55:21.177519 sshd[5217]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:21.191902 systemd[1]: sshd@20-143.244.180.140:22-147.75.109.163:51238.service: Deactivated successfully.
Aug 6 07:55:21.195323 systemd[1]: session-18.scope: Deactivated successfully.
Aug 6 07:55:21.196751 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
Aug 6 07:55:21.215441 systemd[1]: Started sshd@21-143.244.180.140:22-147.75.109.163:51246.service - OpenSSH per-connection server daemon (147.75.109.163:51246).
Aug 6 07:55:21.216689 systemd-logind[1443]: Removed session 18.
Aug 6 07:55:21.283037 sshd[5229]: Accepted publickey for core from 147.75.109.163 port 51246 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:21.284186 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:21.291815 systemd-logind[1443]: New session 19 of user core.
Aug 6 07:55:21.301310 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 6 07:55:22.469834 sshd[5229]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:22.489291 systemd[1]: Started sshd@22-143.244.180.140:22-147.75.109.163:51254.service - OpenSSH per-connection server daemon (147.75.109.163:51254).
Aug 6 07:55:22.492258 systemd[1]: sshd@21-143.244.180.140:22-147.75.109.163:51246.service: Deactivated successfully.
Aug 6 07:55:22.498964 systemd[1]: session-19.scope: Deactivated successfully.
Aug 6 07:55:22.505056 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
Aug 6 07:55:22.510911 systemd-logind[1443]: Removed session 19.
Aug 6 07:55:22.603773 sshd[5252]: Accepted publickey for core from 147.75.109.163 port 51254 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:22.605721 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:22.613429 systemd-logind[1443]: New session 20 of user core.
Aug 6 07:55:22.619262 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 6 07:55:23.515673 sshd[5252]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:23.530532 systemd[1]: sshd@22-143.244.180.140:22-147.75.109.163:51254.service: Deactivated successfully.
Aug 6 07:55:23.533402 systemd[1]: session-20.scope: Deactivated successfully.
Aug 6 07:55:23.539159 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
Aug 6 07:55:23.550642 systemd[1]: Started sshd@23-143.244.180.140:22-147.75.109.163:51266.service - OpenSSH per-connection server daemon (147.75.109.163:51266).
Aug 6 07:55:23.555807 systemd-logind[1443]: Removed session 20.
Aug 6 07:55:23.627079 sshd[5267]: Accepted publickey for core from 147.75.109.163 port 51266 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:23.628521 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:23.636149 systemd-logind[1443]: New session 21 of user core.
Aug 6 07:55:23.640265 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 6 07:55:23.793010 sshd[5267]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:23.800744 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
Aug 6 07:55:23.803133 systemd[1]: sshd@23-143.244.180.140:22-147.75.109.163:51266.service: Deactivated successfully.
Aug 6 07:55:23.810452 systemd[1]: session-21.scope: Deactivated successfully.
Aug 6 07:55:23.813454 systemd-logind[1443]: Removed session 21.
Aug 6 07:55:24.512268 kubelet[2504]: I0806 07:55:24.511046 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 6 07:55:28.810597 systemd[1]: Started sshd@24-143.244.180.140:22-147.75.109.163:51002.service - OpenSSH per-connection server daemon (147.75.109.163:51002).
Aug 6 07:55:28.879481 sshd[5282]: Accepted publickey for core from 147.75.109.163 port 51002 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:28.883480 sshd[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:28.893939 systemd-logind[1443]: New session 22 of user core.
Aug 6 07:55:28.902328 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 6 07:55:29.071449 sshd[5282]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:29.078757 systemd[1]: sshd@24-143.244.180.140:22-147.75.109.163:51002.service: Deactivated successfully.
Aug 6 07:55:29.084018 systemd[1]: session-22.scope: Deactivated successfully.
Aug 6 07:55:29.085549 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
Aug 6 07:55:29.087288 systemd-logind[1443]: Removed session 22.
Aug 6 07:55:32.274108 kubelet[2504]: E0806 07:55:32.274012 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:55:34.098432 systemd[1]: Started sshd@25-143.244.180.140:22-147.75.109.163:36018.service - OpenSSH per-connection server daemon (147.75.109.163:36018).
Aug 6 07:55:34.146445 sshd[5347]: Accepted publickey for core from 147.75.109.163 port 36018 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:34.150884 sshd[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:34.163516 systemd-logind[1443]: New session 23 of user core.
Aug 6 07:55:34.170276 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 6 07:55:34.423943 sshd[5347]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:34.432801 systemd[1]: sshd@25-143.244.180.140:22-147.75.109.163:36018.service: Deactivated successfully.
Aug 6 07:55:34.437061 systemd[1]: session-23.scope: Deactivated successfully.
Aug 6 07:55:34.438854 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
Aug 6 07:55:34.440478 systemd-logind[1443]: Removed session 23.
Aug 6 07:55:37.272805 kubelet[2504]: E0806 07:55:37.272660 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:55:39.443509 systemd[1]: Started sshd@26-143.244.180.140:22-147.75.109.163:36022.service - OpenSSH per-connection server daemon (147.75.109.163:36022).
Aug 6 07:55:39.492501 sshd[5361]: Accepted publickey for core from 147.75.109.163 port 36022 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:39.495334 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:39.503330 systemd-logind[1443]: New session 24 of user core.
Aug 6 07:55:39.510710 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 6 07:55:39.665447 sshd[5361]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:39.669754 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit.
Aug 6 07:55:39.670602 systemd[1]: sshd@26-143.244.180.140:22-147.75.109.163:36022.service: Deactivated successfully.
Aug 6 07:55:39.675216 systemd[1]: session-24.scope: Deactivated successfully.
Aug 6 07:55:39.680593 systemd-logind[1443]: Removed session 24.
Aug 6 07:55:44.688821 systemd[1]: Started sshd@27-143.244.180.140:22-147.75.109.163:53214.service - OpenSSH per-connection server daemon (147.75.109.163:53214).
Aug 6 07:55:44.729758 sshd[5380]: Accepted publickey for core from 147.75.109.163 port 53214 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:44.730691 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:44.737953 systemd-logind[1443]: New session 25 of user core.
Aug 6 07:55:44.744358 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 6 07:55:44.882301 sshd[5380]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:44.887901 systemd[1]: sshd@27-143.244.180.140:22-147.75.109.163:53214.service: Deactivated successfully.
Aug 6 07:55:44.891489 systemd[1]: session-25.scope: Deactivated successfully.
Aug 6 07:55:44.893691 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
Aug 6 07:55:44.895330 systemd-logind[1443]: Removed session 25.
Aug 6 07:55:49.908284 systemd[1]: Started sshd@28-143.244.180.140:22-147.75.109.163:53224.service - OpenSSH per-connection server daemon (147.75.109.163:53224).
Aug 6 07:55:49.951826 sshd[5400]: Accepted publickey for core from 147.75.109.163 port 53224 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:49.953930 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:49.962010 systemd-logind[1443]: New session 26 of user core.
Aug 6 07:55:49.967349 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 6 07:55:50.125514 sshd[5400]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:50.131146 systemd[1]: sshd@28-143.244.180.140:22-147.75.109.163:53224.service: Deactivated successfully.
Aug 6 07:55:50.134881 systemd[1]: session-26.scope: Deactivated successfully.
Aug 6 07:55:50.138706 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit.
Aug 6 07:55:50.139957 systemd-logind[1443]: Removed session 26.
Aug 6 07:55:55.148434 systemd[1]: Started sshd@29-143.244.180.140:22-147.75.109.163:42152.service - OpenSSH per-connection server daemon (147.75.109.163:42152).
Aug 6 07:55:55.211029 sshd[5418]: Accepted publickey for core from 147.75.109.163 port 42152 ssh2: RSA SHA256:SPvEXBcgWiyzsQCUAYKdH6mATUwSf1+92OLxtqkDC6k
Aug 6 07:55:55.213081 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 6 07:55:55.220791 systemd-logind[1443]: New session 27 of user core.
Aug 6 07:55:55.226608 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 6 07:55:55.390456 sshd[5418]: pam_unix(sshd:session): session closed for user core
Aug 6 07:55:55.396473 systemd[1]: sshd@29-143.244.180.140:22-147.75.109.163:42152.service: Deactivated successfully.
Aug 6 07:55:55.400897 systemd[1]: session-27.scope: Deactivated successfully.
Aug 6 07:55:55.403269 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit.
Aug 6 07:55:55.404730 systemd-logind[1443]: Removed session 27.