Jun 26 07:15:53.208952 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 26 07:15:53.208990 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 26 07:15:53.209009 kernel: BIOS-provided physical RAM map:
Jun 26 07:15:53.209020 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 26 07:15:53.209029 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 26 07:15:53.209039 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 26 07:15:53.209051 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jun 26 07:15:53.209062 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jun 26 07:15:53.209072 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 26 07:15:53.209086 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 26 07:15:53.211131 kernel: NX (Execute Disable) protection: active
Jun 26 07:15:53.211168 kernel: APIC: Static calls initialized
Jun 26 07:15:53.211179 kernel: SMBIOS 2.8 present.
Jun 26 07:15:53.211191 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jun 26 07:15:53.211204 kernel: Hypervisor detected: KVM
Jun 26 07:15:53.211224 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 26 07:15:53.211236 kernel: kvm-clock: using sched offset of 5985830129 cycles
Jun 26 07:15:53.211250 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 26 07:15:53.211263 kernel: tsc: Detected 2294.608 MHz processor
Jun 26 07:15:53.211275 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 26 07:15:53.211288 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 26 07:15:53.211301 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jun 26 07:15:53.211313 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 26 07:15:53.211325 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 26 07:15:53.211341 kernel: ACPI: Early table checksum verification disabled
Jun 26 07:15:53.211353 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jun 26 07:15:53.211366 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:15:53.211378 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:15:53.211390 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:15:53.211402 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jun 26 07:15:53.211414 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:15:53.211426 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:15:53.211438 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:15:53.211454 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:15:53.211466 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jun 26 07:15:53.211478 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jun 26 07:15:53.211489 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jun 26 07:15:53.211501 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jun 26 07:15:53.211513 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jun 26 07:15:53.211525 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jun 26 07:15:53.211545 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jun 26 07:15:53.211558 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 26 07:15:53.211571 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 26 07:15:53.211583 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jun 26 07:15:53.211596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jun 26 07:15:53.211609 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jun 26 07:15:53.211622 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jun 26 07:15:53.211638 kernel: Zone ranges:
Jun 26 07:15:53.211651 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 26 07:15:53.211664 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jun 26 07:15:53.211676 kernel: Normal empty
Jun 26 07:15:53.211689 kernel: Movable zone start for each node
Jun 26 07:15:53.211702 kernel: Early memory node ranges
Jun 26 07:15:53.211714 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 26 07:15:53.211727 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jun 26 07:15:53.211739 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jun 26 07:15:53.211756 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 26 07:15:53.211768 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 26 07:15:53.211781 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jun 26 07:15:53.211793 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 26 07:15:53.211806 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 26 07:15:53.211819 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 26 07:15:53.211831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 26 07:15:53.211844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 26 07:15:53.211856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 26 07:15:53.211873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 26 07:15:53.211886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 26 07:15:53.211899 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 26 07:15:53.211911 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 26 07:15:53.211924 kernel: TSC deadline timer available
Jun 26 07:15:53.211936 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 26 07:15:53.211949 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 26 07:15:53.211962 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jun 26 07:15:53.211975 kernel: Booting paravirtualized kernel on KVM
Jun 26 07:15:53.211993 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 26 07:15:53.212006 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 26 07:15:53.212019 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jun 26 07:15:53.212032 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jun 26 07:15:53.212044 kernel: pcpu-alloc: [0] 0 1
Jun 26 07:15:53.212056 kernel: kvm-guest: PV spinlocks disabled, no host support
Jun 26 07:15:53.212071 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 26 07:15:53.212085 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 26 07:15:53.212129 kernel: random: crng init done
Jun 26 07:15:53.212142 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 26 07:15:53.212155 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 26 07:15:53.212167 kernel: Fallback order for Node 0: 0
Jun 26 07:15:53.212181 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jun 26 07:15:53.212193 kernel: Policy zone: DMA32
Jun 26 07:15:53.212206 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 26 07:15:53.212219 kernel: Memory: 1965048K/2096600K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131292K reserved, 0K cma-reserved)
Jun 26 07:15:53.212232 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 26 07:15:53.212250 kernel: Kernel/User page tables isolation: enabled
Jun 26 07:15:53.212262 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 26 07:15:53.212275 kernel: ftrace: allocated 148 pages with 3 groups
Jun 26 07:15:53.212288 kernel: Dynamic Preempt: voluntary
Jun 26 07:15:53.212300 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 26 07:15:53.212315 kernel: rcu: RCU event tracing is enabled.
Jun 26 07:15:53.212328 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 26 07:15:53.212341 kernel: Trampoline variant of Tasks RCU enabled.
Jun 26 07:15:53.212354 kernel: Rude variant of Tasks RCU enabled.
Jun 26 07:15:53.212372 kernel: Tracing variant of Tasks RCU enabled.
Jun 26 07:15:53.212385 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 26 07:15:53.212398 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 26 07:15:53.212410 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 26 07:15:53.212423 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 26 07:15:53.212435 kernel: Console: colour VGA+ 80x25
Jun 26 07:15:53.212448 kernel: printk: console [tty0] enabled
Jun 26 07:15:53.212461 kernel: printk: console [ttyS0] enabled
Jun 26 07:15:53.212473 kernel: ACPI: Core revision 20230628
Jun 26 07:15:53.212486 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 26 07:15:53.212503 kernel: APIC: Switch to symmetric I/O mode setup
Jun 26 07:15:53.212516 kernel: x2apic enabled
Jun 26 07:15:53.212528 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 26 07:15:53.212541 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 26 07:15:53.212554 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jun 26 07:15:53.212567 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jun 26 07:15:53.212580 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 26 07:15:53.212593 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 26 07:15:53.212621 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 26 07:15:53.212635 kernel: Spectre V2 : Mitigation: Retpolines
Jun 26 07:15:53.212648 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 26 07:15:53.212666 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 26 07:15:53.212679 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jun 26 07:15:53.212693 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 26 07:15:53.212706 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 26 07:15:53.212720 kernel: MDS: Mitigation: Clear CPU buffers
Jun 26 07:15:53.212734 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 26 07:15:53.212752 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 26 07:15:53.212766 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 26 07:15:53.212780 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 26 07:15:53.212793 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 26 07:15:53.212806 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jun 26 07:15:53.212820 kernel: Freeing SMP alternatives memory: 32K
Jun 26 07:15:53.212834 kernel: pid_max: default: 32768 minimum: 301
Jun 26 07:15:53.212847 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 26 07:15:53.212865 kernel: SELinux: Initializing.
Jun 26 07:15:53.212879 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 26 07:15:53.212893 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 26 07:15:53.212907 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jun 26 07:15:53.212920 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:15:53.212934 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:15:53.212948 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:15:53.212961 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jun 26 07:15:53.212975 kernel: signal: max sigframe size: 1776
Jun 26 07:15:53.212993 kernel: rcu: Hierarchical SRCU implementation.
Jun 26 07:15:53.213006 kernel: rcu: Max phase no-delay instances is 400.
Jun 26 07:15:53.213020 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 26 07:15:53.213033 kernel: smp: Bringing up secondary CPUs ...
Jun 26 07:15:53.213047 kernel: smpboot: x86: Booting SMP configuration:
Jun 26 07:15:53.213060 kernel: .... node #0, CPUs: #1
Jun 26 07:15:53.213074 kernel: smp: Brought up 1 node, 2 CPUs
Jun 26 07:15:53.213087 kernel: smpboot: Max logical packages: 1
Jun 26 07:15:53.215216 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jun 26 07:15:53.215257 kernel: devtmpfs: initialized
Jun 26 07:15:53.215272 kernel: x86/mm: Memory block size: 128MB
Jun 26 07:15:53.215286 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 26 07:15:53.215301 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 26 07:15:53.215315 kernel: pinctrl core: initialized pinctrl subsystem
Jun 26 07:15:53.215327 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 26 07:15:53.215340 kernel: audit: initializing netlink subsys (disabled)
Jun 26 07:15:53.215353 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 26 07:15:53.215365 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 26 07:15:53.215383 kernel: audit: type=2000 audit(1719386151.500:1): state=initialized audit_enabled=0 res=1
Jun 26 07:15:53.215396 kernel: cpuidle: using governor menu
Jun 26 07:15:53.215409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 26 07:15:53.215422 kernel: dca service started, version 1.12.1
Jun 26 07:15:53.215434 kernel: PCI: Using configuration type 1 for base access
Jun 26 07:15:53.215447 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 26 07:15:53.215463 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 26 07:15:53.215482 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 26 07:15:53.215500 kernel: ACPI: Added _OSI(Module Device)
Jun 26 07:15:53.215522 kernel: ACPI: Added _OSI(Processor Device)
Jun 26 07:15:53.215540 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 26 07:15:53.215559 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 26 07:15:53.215577 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 26 07:15:53.215591 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 26 07:15:53.215604 kernel: ACPI: Interpreter enabled
Jun 26 07:15:53.215618 kernel: ACPI: PM: (supports S0 S5)
Jun 26 07:15:53.215630 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 26 07:15:53.215644 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 26 07:15:53.215673 kernel: PCI: Using E820 reservations for host bridge windows
Jun 26 07:15:53.215686 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 26 07:15:53.215698 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 26 07:15:53.216001 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 26 07:15:53.216384 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 26 07:15:53.216557 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 26 07:15:53.216582 kernel: acpiphp: Slot [3] registered
Jun 26 07:15:53.216607 kernel: acpiphp: Slot [4] registered
Jun 26 07:15:53.216623 kernel: acpiphp: Slot [5] registered
Jun 26 07:15:53.216639 kernel: acpiphp: Slot [6] registered
Jun 26 07:15:53.216655 kernel: acpiphp: Slot [7] registered
Jun 26 07:15:53.216672 kernel: acpiphp: Slot [8] registered
Jun 26 07:15:53.216688 kernel: acpiphp: Slot [9] registered
Jun 26 07:15:53.216704 kernel: acpiphp: Slot [10] registered
Jun 26 07:15:53.216732 kernel: acpiphp: Slot [11] registered
Jun 26 07:15:53.216750 kernel: acpiphp: Slot [12] registered
Jun 26 07:15:53.216773 kernel: acpiphp: Slot [13] registered
Jun 26 07:15:53.216790 kernel: acpiphp: Slot [14] registered
Jun 26 07:15:53.216807 kernel: acpiphp: Slot [15] registered
Jun 26 07:15:53.216824 kernel: acpiphp: Slot [16] registered
Jun 26 07:15:53.216840 kernel: acpiphp: Slot [17] registered
Jun 26 07:15:53.216855 kernel: acpiphp: Slot [18] registered
Jun 26 07:15:53.216873 kernel: acpiphp: Slot [19] registered
Jun 26 07:15:53.216889 kernel: acpiphp: Slot [20] registered
Jun 26 07:15:53.216905 kernel: acpiphp: Slot [21] registered
Jun 26 07:15:53.216922 kernel: acpiphp: Slot [22] registered
Jun 26 07:15:53.216938 kernel: acpiphp: Slot [23] registered
Jun 26 07:15:53.216961 kernel: acpiphp: Slot [24] registered
Jun 26 07:15:53.216978 kernel: acpiphp: Slot [25] registered
Jun 26 07:15:53.216993 kernel: acpiphp: Slot [26] registered
Jun 26 07:15:53.217010 kernel: acpiphp: Slot [27] registered
Jun 26 07:15:53.217026 kernel: acpiphp: Slot [28] registered
Jun 26 07:15:53.217043 kernel: acpiphp: Slot [29] registered
Jun 26 07:15:53.217058 kernel: acpiphp: Slot [30] registered
Jun 26 07:15:53.217074 kernel: acpiphp: Slot [31] registered
Jun 26 07:15:53.217091 kernel: PCI host bridge to bus 0000:00
Jun 26 07:15:53.219394 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 26 07:15:53.219578 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 26 07:15:53.219720 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 26 07:15:53.219867 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jun 26 07:15:53.220012 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jun 26 07:15:53.220198 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 26 07:15:53.221687 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 26 07:15:53.221959 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 26 07:15:53.222197 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jun 26 07:15:53.222364 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jun 26 07:15:53.222531 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jun 26 07:15:53.222724 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jun 26 07:15:53.222887 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jun 26 07:15:53.223046 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jun 26 07:15:53.223291 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jun 26 07:15:53.223459 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jun 26 07:15:53.223648 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 26 07:15:53.223824 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 26 07:15:53.223987 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 26 07:15:53.227963 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jun 26 07:15:53.228247 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jun 26 07:15:53.228431 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jun 26 07:15:53.228610 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jun 26 07:15:53.228778 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jun 26 07:15:53.228955 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 26 07:15:53.229246 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jun 26 07:15:53.229454 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jun 26 07:15:53.229636 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jun 26 07:15:53.229810 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jun 26 07:15:53.230010 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jun 26 07:15:53.230271 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jun 26 07:15:53.230454 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jun 26 07:15:53.230672 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jun 26 07:15:53.230883 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jun 26 07:15:53.231070 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jun 26 07:15:53.231352 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jun 26 07:15:53.231536 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jun 26 07:15:53.231730 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jun 26 07:15:53.231916 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jun 26 07:15:53.232107 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jun 26 07:15:53.233449 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jun 26 07:15:53.233664 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jun 26 07:15:53.233856 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jun 26 07:15:53.234052 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jun 26 07:15:53.234343 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jun 26 07:15:53.234569 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jun 26 07:15:53.234792 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jun 26 07:15:53.234988 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jun 26 07:15:53.235012 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 26 07:15:53.235028 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 26 07:15:53.235044 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 26 07:15:53.235060 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 26 07:15:53.235077 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 26 07:15:53.235091 kernel: iommu: Default domain type: Translated
Jun 26 07:15:53.235263 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 26 07:15:53.235292 kernel: PCI: Using ACPI for IRQ routing
Jun 26 07:15:53.235310 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 26 07:15:53.235329 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 26 07:15:53.235346 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jun 26 07:15:53.235559 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 26 07:15:53.235738 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 26 07:15:53.235923 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 26 07:15:53.235948 kernel: vgaarb: loaded
Jun 26 07:15:53.235976 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 26 07:15:53.235994 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 26 07:15:53.236013 kernel: clocksource: Switched to clocksource kvm-clock
Jun 26 07:15:53.236031 kernel: VFS: Disk quotas dquot_6.6.0
Jun 26 07:15:53.236050 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 26 07:15:53.236069 kernel: pnp: PnP ACPI init
Jun 26 07:15:53.236087 kernel: pnp: PnP ACPI: found 4 devices
Jun 26 07:15:53.236142 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 26 07:15:53.236159 kernel: NET: Registered PF_INET protocol family
Jun 26 07:15:53.236181 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 26 07:15:53.236197 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jun 26 07:15:53.236214 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 26 07:15:53.236230 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 26 07:15:53.236247 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun 26 07:15:53.236261 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jun 26 07:15:53.236278 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 26 07:15:53.236294 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 26 07:15:53.236311 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 26 07:15:53.236334 kernel: NET: Registered PF_XDP protocol family
Jun 26 07:15:53.236536 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 26 07:15:53.236681 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 26 07:15:53.236825 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 26 07:15:53.236975 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jun 26 07:15:53.237164 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jun 26 07:15:53.237345 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 26 07:15:53.237514 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 26 07:15:53.237550 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 26 07:15:53.237711 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 45641 usecs
Jun 26 07:15:53.237734 kernel: PCI: CLS 0 bytes, default 64
Jun 26 07:15:53.237751 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 26 07:15:53.237767 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jun 26 07:15:53.237785 kernel: Initialise system trusted keyrings
Jun 26 07:15:53.237801 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jun 26 07:15:53.237817 kernel: Key type asymmetric registered
Jun 26 07:15:53.237835 kernel: Asymmetric key parser 'x509' registered
Jun 26 07:15:53.237861 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 26 07:15:53.237879 kernel: io scheduler mq-deadline registered
Jun 26 07:15:53.237897 kernel: io scheduler kyber registered
Jun 26 07:15:53.237916 kernel: io scheduler bfq registered
Jun 26 07:15:53.237933 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 26 07:15:53.237952 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jun 26 07:15:53.237970 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 26 07:15:53.237988 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 26 07:15:53.238006 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 26 07:15:53.238028 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 26 07:15:53.238046 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 26 07:15:53.238064 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 26 07:15:53.238082 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 26 07:15:53.238139 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 26 07:15:53.238370 kernel: rtc_cmos 00:03: RTC can wake from S4
Jun 26 07:15:53.238536 kernel: rtc_cmos 00:03: registered as rtc0
Jun 26 07:15:53.238718 kernel: rtc_cmos 00:03: setting system clock to 2024-06-26T07:15:52 UTC (1719386152)
Jun 26 07:15:53.238878 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jun 26 07:15:53.238899 kernel: intel_pstate: CPU model not supported
Jun 26 07:15:53.238917 kernel: NET: Registered PF_INET6 protocol family
Jun 26 07:15:53.238932 kernel: Segment Routing with IPv6
Jun 26 07:15:53.238948 kernel: In-situ OAM (IOAM) with IPv6
Jun 26 07:15:53.238965 kernel: NET: Registered PF_PACKET protocol family
Jun 26 07:15:53.238981 kernel: Key type dns_resolver registered
Jun 26 07:15:53.238998 kernel: IPI shorthand broadcast: enabled
Jun 26 07:15:53.239015 kernel: sched_clock: Marking stable (1431006452, 230792071)->(1791144111, -129345588)
Jun 26 07:15:53.239042 kernel: registered taskstats version 1
Jun 26 07:15:53.239060 kernel: Loading compiled-in X.509 certificates
Jun 26 07:15:53.239078 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 26 07:15:53.239171 kernel: Key type .fscrypt registered
Jun 26 07:15:53.239190 kernel: Key type fscrypt-provisioning registered
Jun 26 07:15:53.239208 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 26 07:15:53.239226 kernel: ima: Allocated hash algorithm: sha1
Jun 26 07:15:53.239245 kernel: ima: No architecture policies found
Jun 26 07:15:53.239268 kernel: clk: Disabling unused clocks
Jun 26 07:15:53.239286 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 26 07:15:53.239304 kernel: Write protecting the kernel read-only data: 36864k
Jun 26 07:15:53.239323 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 26 07:15:53.239366 kernel: Run /init as init process
Jun 26 07:15:53.239386 kernel: with arguments:
Jun 26 07:15:53.239403 kernel: /init
Jun 26 07:15:53.239419 kernel: with environment:
Jun 26 07:15:53.239436 kernel: HOME=/
Jun 26 07:15:53.239453 kernel: TERM=linux
Jun 26 07:15:53.239474 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 26 07:15:53.239495 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 26 07:15:53.239516 systemd[1]: Detected virtualization kvm.
Jun 26 07:15:53.239539 systemd[1]: Detected architecture x86-64.
Jun 26 07:15:53.239556 systemd[1]: Running in initrd.
Jun 26 07:15:53.239575 systemd[1]: No hostname configured, using default hostname.
Jun 26 07:15:53.239594 systemd[1]: Hostname set to .
Jun 26 07:15:53.239618 systemd[1]: Initializing machine ID from VM UUID.
Jun 26 07:15:53.239638 systemd[1]: Queued start job for default target initrd.target.
Jun 26 07:15:53.239658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 26 07:15:53.239678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 26 07:15:53.239699 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 26 07:15:53.239719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 26 07:15:53.239739 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 26 07:15:53.239759 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 26 07:15:53.239787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 26 07:15:53.239807 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 26 07:15:53.239828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 26 07:15:53.239848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 26 07:15:53.239869 systemd[1]: Reached target paths.target - Path Units.
Jun 26 07:15:53.239889 systemd[1]: Reached target slices.target - Slice Units.
Jun 26 07:15:53.239908 systemd[1]: Reached target swap.target - Swaps.
Jun 26 07:15:53.239931 systemd[1]: Reached target timers.target - Timer Units.
Jun 26 07:15:53.239952 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 26 07:15:53.239971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 26 07:15:53.239991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 26 07:15:53.240010 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 26 07:15:53.240034 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 26 07:15:53.240054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 26 07:15:53.240073 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 26 07:15:53.240093 systemd[1]: Reached target sockets.target - Socket Units.
Jun 26 07:15:53.240210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 26 07:15:53.240230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 26 07:15:53.240250 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 26 07:15:53.240270 systemd[1]: Starting systemd-fsck-usr.service...
Jun 26 07:15:53.240291 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 26 07:15:53.240318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 26 07:15:53.240338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:15:53.240358 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 26 07:15:53.240379 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 26 07:15:53.240455 systemd-journald[183]: Collecting audit messages is disabled.
Jun 26 07:15:53.240504 systemd[1]: Finished systemd-fsck-usr.service.
Jun 26 07:15:53.240527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 26 07:15:53.240545 systemd-journald[183]: Journal started
Jun 26 07:15:53.240586 systemd-journald[183]: Runtime Journal (/run/log/journal/075d1d9c539a49f4b4b9625da07cbd8b) is 4.9M, max 39.3M, 34.4M free.
Jun 26 07:15:53.220880 systemd-modules-load[184]: Inserted module 'overlay'
Jun 26 07:15:53.244203 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 26 07:15:53.259402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 26 07:15:53.331238 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 26 07:15:53.331294 kernel: Bridge firewalling registered
Jun 26 07:15:53.287727 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jun 26 07:15:53.335771 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 26 07:15:53.337605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:15:53.348435 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 26 07:15:53.357505 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 26 07:15:53.363809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 26 07:15:53.366333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 26 07:15:53.369786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 26 07:15:53.401962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 26 07:15:53.403515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 26 07:15:53.408363 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 26 07:15:53.413459 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 26 07:15:53.419448 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 26 07:15:53.449063 dracut-cmdline[216]: dracut-dracut-053
Jun 26 07:15:53.458441 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 26 07:15:53.485611 systemd-resolved[217]: Positive Trust Anchors:
Jun 26 07:15:53.485629 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 26 07:15:53.485688 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 26 07:15:53.490416 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jun 26 07:15:53.497827 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 26 07:15:53.498990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 26 07:15:53.750274 kernel: SCSI subsystem initialized
Jun 26 07:15:53.772154 kernel: Loading iSCSI transport class v2.0-870.
Jun 26 07:15:53.801181 kernel: iscsi: registered transport (tcp)
Jun 26 07:15:53.845144 kernel: iscsi: registered transport (qla4xxx)
Jun 26 07:15:53.845247 kernel: QLogic iSCSI HBA Driver
Jun 26 07:15:53.956491 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 26 07:15:53.969456 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 26 07:15:54.032240 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 26 07:15:54.032362 kernel: device-mapper: uevent: version 1.0.3
Jun 26 07:15:54.034936 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 26 07:15:54.104200 kernel: raid6: avx2x4 gen() 14595 MB/s
Jun 26 07:15:54.122199 kernel: raid6: avx2x2 gen() 14018 MB/s
Jun 26 07:15:54.140267 kernel: raid6: avx2x1 gen() 10369 MB/s
Jun 26 07:15:54.140384 kernel: raid6: using algorithm avx2x4 gen() 14595 MB/s
Jun 26 07:15:54.170682 kernel: raid6: .... xor() 7119 MB/s, rmw enabled
Jun 26 07:15:54.172468 kernel: raid6: using avx2x2 recovery algorithm
Jun 26 07:15:54.214136 kernel: xor: automatically using best checksumming function avx
Jun 26 07:15:54.493203 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 26 07:15:54.520411 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 26 07:15:54.530566 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 26 07:15:54.573600 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Jun 26 07:15:54.583198 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 26 07:15:54.593361 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 26 07:15:54.633445 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Jun 26 07:15:54.697441 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 26 07:15:54.706454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 26 07:15:54.829572 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 26 07:15:54.842540 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 26 07:15:54.885444 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 26 07:15:54.890968 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 26 07:15:54.892256 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 26 07:15:54.896665 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 26 07:15:54.906688 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 26 07:15:54.941276 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 26 07:15:54.958146 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jun 26 07:15:55.043917 kernel: scsi host0: Virtio SCSI HBA
Jun 26 07:15:55.044229 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jun 26 07:15:55.044466 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 26 07:15:55.044492 kernel: GPT:9289727 != 125829119
Jun 26 07:15:55.044526 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 26 07:15:55.044557 kernel: GPT:9289727 != 125829119
Jun 26 07:15:55.044582 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 26 07:15:55.044626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:15:55.044658 kernel: cryptd: max_cpu_qlen set to 1000
Jun 26 07:15:55.044703 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 26 07:15:55.044734 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jun 26 07:15:55.086583 kernel: AES CTR mode by8 optimization enabled
Jun 26 07:15:55.086654 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Jun 26 07:15:55.095322 kernel: ACPI: bus type USB registered
Jun 26 07:15:55.095407 kernel: usbcore: registered new interface driver usbfs
Jun 26 07:15:55.097160 kernel: usbcore: registered new interface driver hub
Jun 26 07:15:55.099396 kernel: usbcore: registered new device driver usb
Jun 26 07:15:55.118231 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 26 07:15:55.118471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 26 07:15:55.124888 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 26 07:15:55.125928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:15:55.126240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:15:55.133158 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:15:55.151575 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (452)
Jun 26 07:15:55.156134 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461)
Jun 26 07:15:55.157696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:15:55.248155 kernel: libata version 3.00 loaded.
Jun 26 07:15:55.263147 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 26 07:15:55.303200 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 26 07:15:55.409974 kernel: ata_piix 0000:00:01.1: version 2.13
Jun 26 07:15:55.410337 kernel: scsi host1: ata_piix
Jun 26 07:15:55.410536 kernel: scsi host2: ata_piix
Jun 26 07:15:55.410948 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jun 26 07:15:55.410976 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jun 26 07:15:55.411011 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jun 26 07:15:55.411250 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jun 26 07:15:55.411437 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jun 26 07:15:55.411630 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jun 26 07:15:55.411854 kernel: hub 1-0:1.0: USB hub found
Jun 26 07:15:55.412153 kernel: hub 1-0:1.0: 2 ports detected
Jun 26 07:15:55.408757 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 26 07:15:55.411870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:15:55.421315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 26 07:15:55.441637 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 26 07:15:55.453538 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 26 07:15:55.458441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 26 07:15:55.471231 disk-uuid[539]: Primary Header is updated.
Jun 26 07:15:55.471231 disk-uuid[539]: Secondary Entries is updated.
Jun 26 07:15:55.471231 disk-uuid[539]: Secondary Header is updated.
Jun 26 07:15:55.482182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:15:55.491146 kernel: GPT:disk_guids don't match.
Jun 26 07:15:55.491233 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 26 07:15:55.492976 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:15:55.503302 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 26 07:15:55.507492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:15:56.519966 disk-uuid[541]: The operation has completed successfully.
Jun 26 07:15:56.521350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:15:56.611560 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 26 07:15:56.612870 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 26 07:15:56.627524 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 26 07:15:56.648460 sh[563]: Success
Jun 26 07:15:56.671220 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 26 07:15:56.765425 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 26 07:15:56.776543 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 26 07:15:56.783550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 26 07:15:56.820164 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0
Jun 26 07:15:56.820271 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 26 07:15:56.820300 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 26 07:15:56.823359 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 26 07:15:56.825259 kernel: BTRFS info (device dm-0): using free space tree
Jun 26 07:15:56.851214 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 26 07:15:56.853728 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 26 07:15:56.860519 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 26 07:15:56.864350 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 26 07:15:56.886697 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:15:56.886782 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 26 07:15:56.886839 kernel: BTRFS info (device vda6): using free space tree
Jun 26 07:15:56.897143 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 26 07:15:56.921976 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jun 26 07:15:56.925054 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:15:56.946197 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 26 07:15:56.953534 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 26 07:15:57.068669 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 26 07:15:57.078600 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 26 07:15:57.137853 systemd-networkd[747]: lo: Link UP
Jun 26 07:15:57.139067 systemd-networkd[747]: lo: Gained carrier
Jun 26 07:15:57.144761 systemd-networkd[747]: Enumeration completed
Jun 26 07:15:57.146064 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 26 07:15:57.147328 systemd[1]: Reached target network.target - Network.
Jun 26 07:15:57.148838 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jun 26 07:15:57.148844 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jun 26 07:15:57.151377 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 26 07:15:57.151383 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 26 07:15:57.156327 systemd-networkd[747]: eth0: Link UP
Jun 26 07:15:57.156335 systemd-networkd[747]: eth0: Gained carrier
Jun 26 07:15:57.156352 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jun 26 07:15:57.161623 systemd-networkd[747]: eth1: Link UP
Jun 26 07:15:57.161630 systemd-networkd[747]: eth1: Gained carrier
Jun 26 07:15:57.161650 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 26 07:15:57.175234 systemd-networkd[747]: eth0: DHCPv4 address 144.126.218.72/20, gateway 144.126.208.1 acquired from 169.254.169.253
Jun 26 07:15:57.183249 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253
Jun 26 07:15:57.190682 ignition[661]: Ignition 2.19.0
Jun 26 07:15:57.190696 ignition[661]: Stage: fetch-offline
Jun 26 07:15:57.193000 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 26 07:15:57.190757 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Jun 26 07:15:57.190771 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:15:57.190961 ignition[661]: parsed url from cmdline: ""
Jun 26 07:15:57.190968 ignition[661]: no config URL provided
Jun 26 07:15:57.190977 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Jun 26 07:15:57.190990 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Jun 26 07:15:57.190998 ignition[661]: failed to fetch config: resource requires networking
Jun 26 07:15:57.191544 ignition[661]: Ignition finished successfully
Jun 26 07:15:57.202807 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 26 07:15:57.228160 ignition[758]: Ignition 2.19.0
Jun 26 07:15:57.228182 ignition[758]: Stage: fetch
Jun 26 07:15:57.228523 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jun 26 07:15:57.228540 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:15:57.228686 ignition[758]: parsed url from cmdline: ""
Jun 26 07:15:57.228692 ignition[758]: no config URL provided
Jun 26 07:15:57.228701 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jun 26 07:15:57.228712 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jun 26 07:15:57.228739 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jun 26 07:15:57.247076 ignition[758]: GET result: OK
Jun 26 07:15:57.247364 ignition[758]: parsing config with SHA512: 6f001931b07a1733eb4e89f50e39389ee10c7face71a3e06ab7588a2d16d3db5262864cb022884cb60df54ff2332aff96b05e8c2bcf5c9339f43e86e74d74746
Jun 26 07:15:57.258217 unknown[758]: fetched base config from "system"
Jun 26 07:15:57.258238 unknown[758]: fetched base config from "system"
Jun 26 07:15:57.258888 ignition[758]: fetch: fetch complete
Jun 26 07:15:57.258247 unknown[758]: fetched user config from "digitalocean"
Jun 26 07:15:57.258896 ignition[758]: fetch: fetch passed
Jun 26 07:15:57.262835 ignition[758]: Ignition finished successfully
Jun 26 07:15:57.265611 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 26 07:15:57.271508 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 26 07:15:57.317166 ignition[765]: Ignition 2.19.0
Jun 26 07:15:57.317192 ignition[765]: Stage: kargs
Jun 26 07:15:57.317533 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jun 26 07:15:57.317554 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:15:57.321788 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 26 07:15:57.319934 ignition[765]: kargs: kargs passed
Jun 26 07:15:57.320040 ignition[765]: Ignition finished successfully
Jun 26 07:15:57.330468 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 26 07:15:57.386923 ignition[772]: Ignition 2.19.0
Jun 26 07:15:57.386950 ignition[772]: Stage: disks
Jun 26 07:15:57.387295 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jun 26 07:15:57.387317 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:15:57.389260 ignition[772]: disks: disks passed
Jun 26 07:15:57.389347 ignition[772]: Ignition finished successfully
Jun 26 07:15:57.398523 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 26 07:15:57.401070 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 26 07:15:57.403046 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 26 07:15:57.404963 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 26 07:15:57.405817 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 26 07:15:57.406795 systemd[1]: Reached target basic.target - Basic System.
Jun 26 07:15:57.428517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 26 07:15:57.470212 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jun 26 07:15:57.479825 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 26 07:15:57.496303 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 26 07:15:57.658574 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 26 07:15:57.660283 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none.
Jun 26 07:15:57.661787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 26 07:15:57.675340 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 26 07:15:57.680320 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 26 07:15:57.689474 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jun 26 07:15:57.711567 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (789)
Jun 26 07:15:57.715654 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:15:57.715745 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 26 07:15:57.715773 kernel: BTRFS info (device vda6): using free space tree
Jun 26 07:15:57.761294 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 26 07:15:57.878904 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 26 07:15:57.878961 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 26 07:15:57.890802 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 26 07:15:57.886382 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 26 07:15:57.895539 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 26 07:15:57.905546 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 26 07:15:58.354297 coreos-metadata[792]: Jun 26 07:15:58.354 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:15:58.392312 coreos-metadata[792]: Jun 26 07:15:58.390 INFO Fetch successful
Jun 26 07:15:58.394275 coreos-metadata[791]: Jun 26 07:15:58.392 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:15:58.493745 coreos-metadata[791]: Jun 26 07:15:58.492 INFO Fetch successful
Jun 26 07:15:58.495458 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jun 26 07:15:58.505825 coreos-metadata[792]: Jun 26 07:15:58.494 INFO wrote hostname ci-4012.0.0-2-1603354b52 to /sysroot/etc/hostname
Jun 26 07:15:58.495609 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jun 26 07:15:58.500368 systemd-networkd[747]: eth0: Gained IPv6LL
Jun 26 07:15:58.501396 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 26 07:15:58.541329 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jun 26 07:15:58.712362 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jun 26 07:15:58.725229 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jun 26 07:15:58.748827 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 26 07:15:58.946845 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 26 07:15:58.955342 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 26 07:15:58.958423 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 26 07:15:58.990493 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 26 07:15:59.000971 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:15:59.047696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 26 07:15:59.057943 ignition[912]: INFO : Ignition 2.19.0
Jun 26 07:15:59.061855 ignition[912]: INFO : Stage: mount
Jun 26 07:15:59.061855 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 26 07:15:59.061855 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:15:59.068801 ignition[912]: INFO : mount: mount passed
Jun 26 07:15:59.069732 ignition[912]: INFO : Ignition finished successfully
Jun 26 07:15:59.070712 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 26 07:15:59.087519 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 26 07:15:59.115248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 26 07:15:59.141520 systemd-networkd[747]: eth1: Gained IPv6LL
Jun 26 07:15:59.157958 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Jun 26 07:15:59.158029 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:15:59.163821 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 26 07:15:59.163937 kernel: BTRFS info (device vda6): using free space tree
Jun 26 07:15:59.176183 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 26 07:15:59.186354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 26 07:15:59.265185 ignition[941]: INFO : Ignition 2.19.0
Jun 26 07:15:59.265185 ignition[941]: INFO : Stage: files
Jun 26 07:15:59.265185 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 26 07:15:59.265185 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:15:59.269971 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Jun 26 07:15:59.272873 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 26 07:15:59.272873 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 26 07:15:59.283380 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 26 07:15:59.301201 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 26 07:15:59.324945 unknown[941]: wrote ssh authorized keys file for user: core
Jun 26 07:15:59.327233 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 26 07:15:59.341032 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jun 26 07:15:59.341032 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jun 26 07:15:59.344851 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 26 07:15:59.344851 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jun 26 07:15:59.399546 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 26 07:15:59.503644 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 26 07:15:59.509343 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 26 07:15:59.509343 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 26 07:15:59.509343 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 26 07:15:59.509343 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 26 07:15:59.509343 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 26 07:15:59.509343 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 26 07:15:59.509343 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 26 07:15:59.530855 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 26 07:15:59.530855 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 26 07:15:59.530855 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 26 07:15:59.530855 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 26 07:15:59.530855 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 26 07:15:59.530855 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 26 07:15:59.530855 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jun 26 07:16:00.019486 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 26 07:16:00.575125 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(c): [started] processing unit "containerd.service"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 26 07:16:00.575125 ignition[941]: INFO : files: files passed
Jun 26 07:16:00.575125 ignition[941]: INFO : Ignition finished successfully
Jun 26 07:16:00.577668 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 26 07:16:00.593455 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 26 07:16:00.612303 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 26 07:16:00.625942 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 26 07:16:00.626152 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 26 07:16:00.645182 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 26 07:16:00.645182 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 26 07:16:00.651076 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 26 07:16:00.652586 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 26 07:16:00.656921 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 26 07:16:00.681455 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 26 07:16:00.740185 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 26 07:16:00.741408 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 26 07:16:00.752467 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 26 07:16:00.755002 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 26 07:16:00.757494 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 26 07:16:00.762406 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 26 07:16:00.842728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 26 07:16:00.853499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 26 07:16:00.899117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 26 07:16:00.905024 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 26 07:16:00.906096 systemd[1]: Stopped target timers.target - Timer Units.
Jun 26 07:16:00.907037 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 26 07:16:00.907286 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 26 07:16:00.908457 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 26 07:16:00.909343 systemd[1]: Stopped target basic.target - Basic System.
Jun 26 07:16:00.910204 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 26 07:16:00.911188 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 26 07:16:00.912132 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 26 07:16:00.913025 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 26 07:16:00.915243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 26 07:16:00.920102 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 26 07:16:00.926001 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 26 07:16:00.927452 systemd[1]: Stopped target swap.target - Swaps.
Jun 26 07:16:00.934154 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 26 07:16:00.934398 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 26 07:16:00.935940 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 26 07:16:00.937314 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 26 07:16:00.938516 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 26 07:16:00.938683 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 26 07:16:00.940671 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 26 07:16:00.940898 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 26 07:16:00.944508 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 26 07:16:00.946046 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 26 07:16:00.952562 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 26 07:16:00.952792 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 26 07:16:00.954777 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 26 07:16:00.956157 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 26 07:16:01.002821 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 26 07:16:01.165629 ignition[994]: INFO : Ignition 2.19.0
Jun 26 07:16:01.165629 ignition[994]: INFO : Stage: umount
Jun 26 07:16:01.165629 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 26 07:16:01.165629 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:16:01.165629 ignition[994]: INFO : umount: umount passed
Jun 26 07:16:01.165629 ignition[994]: INFO : Ignition finished successfully
Jun 26 07:16:01.008519 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 26 07:16:01.016715 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 26 07:16:01.017019 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 26 07:16:01.079888 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 26 07:16:01.080196 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 26 07:16:01.110995 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 26 07:16:01.111215 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 26 07:16:01.220402 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 26 07:16:01.221722 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 26 07:16:01.221962 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 26 07:16:01.283999 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 26 07:16:01.284285 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 26 07:16:01.310580 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 26 07:16:01.310715 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 26 07:16:01.333904 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 26 07:16:01.333996 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 26 07:16:01.355055 systemd[1]: Stopped target network.target - Network.
Jun 26 07:16:01.366286 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 26 07:16:01.366502 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 26 07:16:01.367639 systemd[1]: Stopped target paths.target - Path Units.
Jun 26 07:16:01.369654 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 26 07:16:01.374301 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 26 07:16:01.376591 systemd[1]: Stopped target slices.target - Slice Units.
Jun 26 07:16:01.377359 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 26 07:16:01.388714 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 26 07:16:01.389865 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 26 07:16:01.391238 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 26 07:16:01.391319 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 26 07:16:01.394755 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 26 07:16:01.394862 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 26 07:16:01.400309 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 26 07:16:01.400422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 26 07:16:01.403136 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 26 07:16:01.404198 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 26 07:16:01.407966 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 26 07:16:01.411773 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 26 07:16:01.413490 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 26 07:16:01.418075 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 26 07:16:01.450208 systemd-networkd[747]: eth1: DHCPv6 lease lost
Jun 26 07:16:01.483281 systemd-networkd[747]: eth0: DHCPv6 lease lost
Jun 26 07:16:01.492216 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 26 07:16:01.492567 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 26 07:16:01.494524 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 26 07:16:01.494659 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 26 07:16:01.547570 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 26 07:16:01.563114 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 26 07:16:01.563247 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 26 07:16:01.572348 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 26 07:16:01.591988 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 26 07:16:01.592214 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 26 07:16:01.617380 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 26 07:16:01.618020 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 26 07:16:01.619167 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 26 07:16:01.619268 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 26 07:16:01.620243 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 26 07:16:01.620346 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 26 07:16:01.654887 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 26 07:16:01.655690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 26 07:16:01.657683 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 26 07:16:01.657809 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 26 07:16:01.660496 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 26 07:16:01.660570 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 26 07:16:01.669185 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 26 07:16:01.669296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 26 07:16:01.701545 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 26 07:16:01.701660 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 26 07:16:01.705160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 26 07:16:01.705259 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 26 07:16:01.753790 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 26 07:16:01.755968 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 26 07:16:01.757133 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 26 07:16:01.771266 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 26 07:16:01.771378 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 26 07:16:01.772748 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 26 07:16:01.772826 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 26 07:16:01.773721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:16:01.773801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:16:01.777999 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 26 07:16:01.779221 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 26 07:16:01.781800 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 26 07:16:01.781989 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 26 07:16:01.797533 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 26 07:16:01.816491 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 26 07:16:01.856583 systemd[1]: Switching root.
Jun 26 07:16:01.948756 systemd-journald[183]: Journal stopped
Jun 26 07:16:04.888554 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jun 26 07:16:04.888693 kernel: SELinux: policy capability network_peer_controls=1
Jun 26 07:16:04.888727 kernel: SELinux: policy capability open_perms=1
Jun 26 07:16:04.888757 kernel: SELinux: policy capability extended_socket_class=1
Jun 26 07:16:04.888775 kernel: SELinux: policy capability always_check_network=0
Jun 26 07:16:04.888795 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 26 07:16:04.888822 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 26 07:16:04.888851 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 26 07:16:04.888883 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 26 07:16:04.888909 kernel: audit: type=1403 audit(1719386162.491:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 26 07:16:04.888941 systemd[1]: Successfully loaded SELinux policy in 80.557ms.
Jun 26 07:16:04.888974 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.080ms.
Jun 26 07:16:04.889013 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 26 07:16:04.889042 systemd[1]: Detected virtualization kvm.
Jun 26 07:16:04.889077 systemd[1]: Detected architecture x86-64.
Jun 26 07:16:04.890272 systemd[1]: Detected first boot.
Jun 26 07:16:04.890330 systemd[1]: Hostname set to .
Jun 26 07:16:04.890361 systemd[1]: Initializing machine ID from VM UUID.
Jun 26 07:16:04.890396 zram_generator::config[1080]: No configuration found.
Jun 26 07:16:04.890436 systemd[1]: Populated /etc with preset unit settings.
Jun 26 07:16:04.890472 systemd[1]: Queued start job for default target multi-user.target.
Jun 26 07:16:04.890522 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 26 07:16:04.890560 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 26 07:16:04.890593 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 26 07:16:04.890626 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 26 07:16:04.890663 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 26 07:16:04.890690 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 26 07:16:04.890719 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 26 07:16:04.890748 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 26 07:16:04.890777 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 26 07:16:04.890806 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 26 07:16:04.890834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 26 07:16:04.890863 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 26 07:16:04.890894 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 26 07:16:04.890935 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 26 07:16:04.890969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 26 07:16:04.891003 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 26 07:16:04.891032 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 26 07:16:04.891060 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 26 07:16:04.891088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 26 07:16:04.893226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 26 07:16:04.893292 systemd[1]: Reached target slices.target - Slice Units.
Jun 26 07:16:04.893322 systemd[1]: Reached target swap.target - Swaps.
Jun 26 07:16:04.893350 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 26 07:16:04.893379 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 26 07:16:04.893414 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 26 07:16:04.893446 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 26 07:16:04.893492 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 26 07:16:04.893536 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 26 07:16:04.893578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 26 07:16:04.893611 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 26 07:16:04.893641 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 26 07:16:04.893674 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 26 07:16:04.893707 systemd[1]: Mounting media.mount - External Media Directory...
Jun 26 07:16:04.893740 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:16:04.893768 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 26 07:16:04.893796 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 26 07:16:04.893824 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 26 07:16:04.893861 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 26 07:16:04.893891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:16:04.893921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 26 07:16:04.893956 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 26 07:16:04.893991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 26 07:16:04.894025 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 26 07:16:04.894061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 26 07:16:04.894096 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 26 07:16:04.894157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 26 07:16:04.894196 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 26 07:16:04.894220 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jun 26 07:16:04.894258 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jun 26 07:16:04.894295 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 26 07:16:04.894331 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 26 07:16:04.894355 kernel: loop: module loaded
Jun 26 07:16:04.894381 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 26 07:16:04.894415 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 26 07:16:04.894450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 26 07:16:04.894485 kernel: fuse: init (API version 7.39)
Jun 26 07:16:04.894541 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:16:04.894576 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 26 07:16:04.894611 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 26 07:16:04.894648 systemd[1]: Mounted media.mount - External Media Directory.
Jun 26 07:16:04.894677 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 26 07:16:04.894718 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 26 07:16:04.894741 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 26 07:16:04.894772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 26 07:16:04.894792 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 26 07:16:04.894815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 26 07:16:04.894837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 26 07:16:04.894859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 26 07:16:04.894879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 26 07:16:04.894908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 26 07:16:04.894930 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 26 07:16:04.894956 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 26 07:16:04.894987 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 26 07:16:04.895037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 26 07:16:04.895064 kernel: ACPI: bus type drm_connector registered
Jun 26 07:16:04.895091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 26 07:16:04.898291 systemd-journald[1167]: Collecting audit messages is disabled.
Jun 26 07:16:04.898362 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 26 07:16:04.898394 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 26 07:16:04.898424 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 26 07:16:04.898463 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 26 07:16:04.898494 systemd-journald[1167]: Journal started
Jun 26 07:16:04.898576 systemd-journald[1167]: Runtime Journal (/run/log/journal/075d1d9c539a49f4b4b9625da07cbd8b) is 4.9M, max 39.3M, 34.4M free.
Jun 26 07:16:04.906378 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 26 07:16:04.929854 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 26 07:16:04.940327 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 26 07:16:04.953324 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 26 07:16:04.955382 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 26 07:16:04.965500 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 26 07:16:04.993429 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 26 07:16:04.996942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 26 07:16:05.002430 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 26 07:16:05.005391 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 26 07:16:05.017302 systemd-journald[1167]: Time spent on flushing to /var/log/journal/075d1d9c539a49f4b4b9625da07cbd8b is 141.785ms for 973 entries.
Jun 26 07:16:05.017302 systemd-journald[1167]: System Journal (/var/log/journal/075d1d9c539a49f4b4b9625da07cbd8b) is 8.0M, max 195.6M, 187.6M free.
Jun 26 07:16:05.190391 systemd-journald[1167]: Received client request to flush runtime journal.
Jun 26 07:16:05.028432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 26 07:16:05.049379 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 26 07:16:05.061392 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 26 07:16:05.064647 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 26 07:16:05.066016 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 26 07:16:05.105330 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 26 07:16:05.108198 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 26 07:16:05.147856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 26 07:16:05.164480 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 26 07:16:05.169285 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 26 07:16:05.194797 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 26 07:16:05.219873 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jun 26 07:16:05.219917 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jun 26 07:16:05.222698 udevadm[1229]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jun 26 07:16:05.233941 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 26 07:16:05.244631 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 26 07:16:05.341181 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 26 07:16:05.354857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 26 07:16:05.439862 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jun 26 07:16:05.443227 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jun 26 07:16:05.462913 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 26 07:16:06.534520 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 26 07:16:06.548540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 26 07:16:06.607969 systemd-udevd[1247]: Using default interface naming scheme 'v255'.
Jun 26 07:16:06.676327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 26 07:16:06.725061 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 26 07:16:06.767953 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 26 07:16:06.887165 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1258) Jun 26 07:16:06.904061 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 26 07:16:06.925880 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 26 07:16:07.011173 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1261) Jun 26 07:16:07.037965 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:07.038373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 26 07:16:07.047569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 26 07:16:07.063777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 26 07:16:07.081991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 26 07:16:07.084644 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 26 07:16:07.084723 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 26 07:16:07.084836 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 26 07:16:07.096385 systemd-networkd[1248]: lo: Link UP Jun 26 07:16:07.096400 systemd-networkd[1248]: lo: Gained carrier Jun 26 07:16:07.102673 systemd-networkd[1248]: Enumeration completed Jun 26 07:16:07.103465 systemd-networkd[1248]: eth0: Configuring with /run/systemd/network/10-0e:5d:a6:9d:a2:39.network. Jun 26 07:16:07.104482 systemd-networkd[1248]: eth1: Configuring with /run/systemd/network/10-66:50:31:02:fa:61.network. Jun 26 07:16:07.105265 systemd-networkd[1248]: eth0: Link UP Jun 26 07:16:07.105272 systemd-networkd[1248]: eth0: Gained carrier Jun 26 07:16:07.112019 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 26 07:16:07.117889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 26 07:16:07.118221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 26 07:16:07.132956 systemd-networkd[1248]: eth1: Link UP Jun 26 07:16:07.132966 systemd-networkd[1248]: eth1: Gained carrier Jun 26 07:16:07.143575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 26 07:16:07.143926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 26 07:16:07.152006 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 26 07:16:07.156478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 26 07:16:07.231136 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 26 07:16:07.268149 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 26 07:16:07.273410 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 26 07:16:07.275401 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 26 07:16:07.275484 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jun 26 07:16:07.303143 kernel: ACPI: button: Power Button [PWRF]
Jun 26 07:16:07.361224 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jun 26 07:16:07.430964 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 26 07:16:07.448040 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:16:07.449322 kernel: mousedev: PS/2 mouse device common for all mice
Jun 26 07:16:07.469153 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jun 26 07:16:07.469306 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jun 26 07:16:07.474303 kernel: Console: switching to colour dummy device 80x25
Jun 26 07:16:07.478084 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jun 26 07:16:07.478206 kernel: [drm] features: -context_init
Jun 26 07:16:07.483702 kernel: [drm] number of scanouts: 1
Jun 26 07:16:07.483804 kernel: [drm] number of cap sets: 0
Jun 26 07:16:07.494155 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jun 26 07:16:07.535509 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jun 26 07:16:07.535668 kernel: Console: switching to colour frame buffer device 128x48
Jun 26 07:16:07.554590 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jun 26 07:16:07.557155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:16:07.557787 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:16:07.576516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:16:07.626427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:16:07.626915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:16:07.679795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:16:07.787868 kernel: EDAC MC: Ver: 3.0.0
Jun 26 07:16:07.827684 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 26 07:16:07.841608 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 26 07:16:07.895770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:16:07.901234 lvm[1313]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 26 07:16:07.950267 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 26 07:16:07.951318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 26 07:16:07.968529 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 26 07:16:07.976539 lvm[1319]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 26 07:16:08.017764 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 26 07:16:08.021681 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 26 07:16:08.035808 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jun 26 07:16:08.037362 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 26 07:16:08.037422 systemd[1]: Reached target machines.target - Containers.
Jun 26 07:16:08.052364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 26 07:16:08.106947 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 26 07:16:08.116157 kernel: ISO 9660 Extensions: RRIP_1991A
Jun 26 07:16:08.125443 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jun 26 07:16:08.137138 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 26 07:16:08.140884 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 26 07:16:08.148450 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 26 07:16:08.160783 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 26 07:16:08.173405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:16:08.176378 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 26 07:16:08.178635 systemd-networkd[1248]: eth0: Gained IPv6LL
Jun 26 07:16:08.187393 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 26 07:16:08.201314 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 26 07:16:08.252957 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 26 07:16:08.330974 kernel: loop0: detected capacity change from 0 to 80568
Jun 26 07:16:08.336593 kernel: block loop0: the capability attribute has been deprecated.
Jun 26 07:16:08.559295 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 26 07:16:08.565826 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 26 07:16:08.603328 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 26 07:16:08.690407 kernel: loop1: detected capacity change from 0 to 209816
Jun 26 07:16:08.932362 systemd-networkd[1248]: eth1: Gained IPv6LL
Jun 26 07:16:09.054925 kernel: loop2: detected capacity change from 0 to 139760
Jun 26 07:16:09.291990 kernel: loop3: detected capacity change from 0 to 8
Jun 26 07:16:09.337434 kernel: loop4: detected capacity change from 0 to 80568
Jun 26 07:16:09.418373 kernel: loop5: detected capacity change from 0 to 209816
Jun 26 07:16:09.523644 kernel: loop6: detected capacity change from 0 to 139760
Jun 26 07:16:09.595463 kernel: loop7: detected capacity change from 0 to 8
Jun 26 07:16:09.596486 (sd-merge)[1346]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jun 26 07:16:09.597506 (sd-merge)[1346]: Merged extensions into '/usr'.
Jun 26 07:16:09.651597 systemd[1]: Reloading requested from client PID 1334 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 26 07:16:09.651617 systemd[1]: Reloading...
Jun 26 07:16:09.869465 zram_generator::config[1371]: No configuration found.
Jun 26 07:16:10.368893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 26 07:16:10.548879 systemd[1]: Reloading finished in 896 ms.
Jun 26 07:16:10.597400 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 26 07:16:10.611527 systemd[1]: Starting ensure-sysext.service...
Jun 26 07:16:10.626469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 26 07:16:10.641943 systemd[1]: Reloading requested from client PID 1419 ('systemctl') (unit ensure-sysext.service)...
Jun 26 07:16:10.641974 systemd[1]: Reloading...
Jun 26 07:16:10.684155 ldconfig[1332]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 26 07:16:10.739060 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 26 07:16:10.739747 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 26 07:16:10.744431 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 26 07:16:10.745037 systemd-tmpfiles[1420]: ACLs are not supported, ignoring.
Jun 26 07:16:10.746319 systemd-tmpfiles[1420]: ACLs are not supported, ignoring.
Jun 26 07:16:10.754290 systemd-tmpfiles[1420]: Detected autofs mount point /boot during canonicalization of boot.
Jun 26 07:16:10.754316 systemd-tmpfiles[1420]: Skipping /boot
Jun 26 07:16:10.773177 zram_generator::config[1447]: No configuration found.
Jun 26 07:16:10.776635 systemd-tmpfiles[1420]: Detected autofs mount point /boot during canonicalization of boot.
Jun 26 07:16:10.776660 systemd-tmpfiles[1420]: Skipping /boot
Jun 26 07:16:11.146537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 26 07:16:11.278343 systemd[1]: Reloading finished in 635 ms.
Jun 26 07:16:11.302727 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 26 07:16:11.315436 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 26 07:16:11.335425 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 26 07:16:11.349512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 26 07:16:11.359831 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 26 07:16:11.374241 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 26 07:16:11.394551 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 26 07:16:11.419469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:16:11.419819 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:16:11.423579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 26 07:16:11.438619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 26 07:16:11.472615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 26 07:16:11.476137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:16:11.476456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:16:11.482557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 26 07:16:11.482901 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 26 07:16:11.502723 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 26 07:16:11.511806 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 26 07:16:11.514323 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 26 07:16:11.529766 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 26 07:16:11.539854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 26 07:16:11.540189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 26 07:16:11.563840 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:16:11.564261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:16:11.572746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 26 07:16:11.586566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 26 07:16:11.619514 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 26 07:16:11.620384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:16:11.642542 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 26 07:16:11.648119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:16:11.661656 augenrules[1536]: No rules
Jun 26 07:16:11.660809 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 26 07:16:11.666300 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 26 07:16:11.679913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 26 07:16:11.680285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 26 07:16:11.683094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 26 07:16:11.683493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 26 07:16:11.693976 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 26 07:16:11.694804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 26 07:16:11.707475 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 26 07:16:11.728274 systemd-resolved[1504]: Positive Trust Anchors:
Jun 26 07:16:11.728300 systemd-resolved[1504]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 26 07:16:11.728392 systemd-resolved[1504]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 26 07:16:11.729835 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:16:11.731204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:16:11.738563 systemd-resolved[1504]: Using system hostname 'ci-4012.0.0-2-1603354b52'.
Jun 26 07:16:11.741691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 26 07:16:11.759317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 26 07:16:11.783823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 26 07:16:11.799555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 26 07:16:11.800553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:16:11.800792 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 26 07:16:11.800929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:16:11.804333 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 26 07:16:11.808396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 26 07:16:11.809400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 26 07:16:11.815969 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 26 07:16:11.816506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 26 07:16:11.822950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 26 07:16:11.823750 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 26 07:16:11.827673 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 26 07:16:11.828818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 26 07:16:11.841613 systemd[1]: Finished ensure-sysext.service.
Jun 26 07:16:11.859900 systemd[1]: Reached target network.target - Network.
Jun 26 07:16:11.860933 systemd[1]: Reached target network-online.target - Network is Online.
Jun 26 07:16:11.861729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 26 07:16:11.865645 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 26 07:16:11.865778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 26 07:16:11.880473 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 26 07:16:11.992767 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 26 07:16:11.994168 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 26 07:16:11.997552 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 26 07:16:11.999822 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 26 07:16:12.000795 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 26 07:16:12.002770 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 26 07:16:12.002830 systemd[1]: Reached target paths.target - Path Units.
Jun 26 07:16:12.003551 systemd[1]: Reached target time-set.target - System Time Set.
Jun 26 07:16:12.005030 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 26 07:16:12.006059 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 26 07:16:12.007129 systemd[1]: Reached target timers.target - Timer Units.
Jun 26 07:16:12.013024 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 26 07:16:12.019559 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 26 07:16:12.028132 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 26 07:16:12.033838 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 26 07:16:12.036302 systemd[1]: Reached target sockets.target - Socket Units.
Jun 26 07:16:12.037090 systemd[1]: Reached target basic.target - Basic System.
Jun 26 07:16:12.038113 systemd[1]: System is tainted: cgroupsv1
Jun 26 07:16:12.039062 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 26 07:16:12.039144 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 26 07:16:12.052533 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 26 07:16:12.664985 systemd-resolved[1504]: Clock change detected. Flushing caches.
Jun 26 07:16:12.665971 systemd-timesyncd[1571]: Contacted time server 216.229.4.66:123 (0.flatcar.pool.ntp.org).
Jun 26 07:16:12.666931 systemd-timesyncd[1571]: Initial clock synchronization to Wed 2024-06-26 07:16:12.664786 UTC.
Jun 26 07:16:12.668152 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 26 07:16:12.681102 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 26 07:16:12.692902 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 26 07:16:12.703205 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 26 07:16:12.704098 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 26 07:16:12.725671 coreos-metadata[1577]: Jun 26 07:16:12.725 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:16:12.729105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:16:12.742086 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 26 07:16:12.752819 coreos-metadata[1577]: Jun 26 07:16:12.742 INFO Fetch successful
Jun 26 07:16:12.753539 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 26 07:16:12.774780 jq[1581]: false
Jun 26 07:16:12.782918 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 26 07:16:12.803027 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 26 07:16:12.808506 dbus-daemon[1578]: [system] SELinux support is enabled
Jun 26 07:16:12.808997 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 26 07:16:12.834992 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found loop4
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found loop5
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found loop6
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found loop7
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found vda
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found vda1
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found vda2
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found vda3
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found usr
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found vda4
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found vda6
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found vda7
Jun 26 07:16:12.846029 extend-filesystems[1582]: Found vda9
Jun 26 07:16:12.846029 extend-filesystems[1582]: Checking size of /dev/vda9
Jun 26 07:16:12.842913 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 26 07:16:13.029841 extend-filesystems[1582]: Resized partition /dev/vda9
Jun 26 07:16:12.851396 systemd[1]: Starting update-engine.service - Update Engine...
Jun 26 07:16:13.063066 extend-filesystems[1621]: resize2fs 1.47.0 (5-Feb-2023)
Jun 26 07:16:13.120735 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jun 26 07:16:12.876382 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 26 07:16:12.894564 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 26 07:16:12.932842 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 26 07:16:13.137166 jq[1599]: true
Jun 26 07:16:12.933280 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 26 07:16:12.961893 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 26 07:16:12.962318 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 26 07:16:13.141594 tar[1617]: linux-amd64/helm
Jun 26 07:16:13.051200 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 26 07:16:13.188079 update_engine[1596]: I0626 07:16:13.144150 1596 main.cc:92] Flatcar Update Engine starting
Jun 26 07:16:13.051287 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 26 07:16:13.188619 jq[1620]: true
Jun 26 07:16:13.053809 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 26 07:16:13.054092 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jun 26 07:16:13.054162 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 26 07:16:13.085855 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 26 07:16:13.146239 systemd[1]: motdgen.service: Deactivated successfully.
Jun 26 07:16:13.224619 update_engine[1596]: I0626 07:16:13.221323 1596 update_check_scheduler.cc:74] Next update check in 10m30s
Jun 26 07:16:13.146674 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 26 07:16:13.148523 (ntainerd)[1627]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 26 07:16:13.198127 systemd[1]: Started update-engine.service - Update Engine.
Jun 26 07:16:13.215217 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 26 07:16:13.225385 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 26 07:16:13.227534 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 26 07:16:13.240713 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 26 07:16:13.346712 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1645)
Jun 26 07:16:13.370629 systemd-logind[1593]: New seat seat0.
Jun 26 07:16:13.381054 systemd-logind[1593]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 26 07:16:13.381095 systemd-logind[1593]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 26 07:16:13.384273 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 26 07:16:13.685903 bash[1668]: Updated "/home/core/.ssh/authorized_keys"
Jun 26 07:16:13.685486 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 26 07:16:13.722773 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jun 26 07:16:13.715379 systemd[1]: Starting sshkeys.service...
Jun 26 07:16:13.804745 extend-filesystems[1621]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 26 07:16:13.804745 extend-filesystems[1621]: old_desc_blocks = 1, new_desc_blocks = 8
Jun 26 07:16:13.804745 extend-filesystems[1621]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jun 26 07:16:13.793988 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 26 07:16:13.832163 extend-filesystems[1582]: Resized filesystem in /dev/vda9
Jun 26 07:16:13.832163 extend-filesystems[1582]: Found vdb
Jun 26 07:16:13.794326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 26 07:16:13.818212 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 26 07:16:13.838305 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 26 07:16:13.918730 coreos-metadata[1689]: Jun 26 07:16:13.914 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:16:13.919001 locksmithd[1650]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 26 07:16:13.932951 coreos-metadata[1689]: Jun 26 07:16:13.928 INFO Fetch successful
Jun 26 07:16:13.951164 unknown[1689]: wrote ssh authorized keys file for user: core
Jun 26 07:16:14.057868 update-ssh-keys[1697]: Updated "/home/core/.ssh/authorized_keys"
Jun 26 07:16:14.046705 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 26 07:16:14.065496 systemd[1]: Finished sshkeys.service.
Jun 26 07:16:14.127658 containerd[1627]: time="2024-06-26T07:16:14.126964067Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 26 07:16:14.263400 containerd[1627]: time="2024-06-26T07:16:14.263154920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 26 07:16:14.263400 containerd[1627]: time="2024-06-26T07:16:14.263401393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:16:14.270322 containerd[1627]: time="2024-06-26T07:16:14.270233722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:16:14.270322 containerd[1627]: time="2024-06-26T07:16:14.270308547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:16:14.276486 containerd[1627]: time="2024-06-26T07:16:14.276026019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:16:14.276486 containerd[1627]: time="2024-06-26T07:16:14.276090844Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 26 07:16:14.276486 containerd[1627]: time="2024-06-26T07:16:14.276327648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 26 07:16:14.280880 containerd[1627]: time="2024-06-26T07:16:14.280583458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:16:14.280880 containerd[1627]: time="2024-06-26T07:16:14.280650588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 26 07:16:14.280880 containerd[1627]: time="2024-06-26T07:16:14.280865384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:16:14.284712 containerd[1627]: time="2024-06-26T07:16:14.281245840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 26 07:16:14.284712 containerd[1627]: time="2024-06-26T07:16:14.281287929Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 26 07:16:14.284712 containerd[1627]: time="2024-06-26T07:16:14.281306958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:16:14.284712 containerd[1627]: time="2024-06-26T07:16:14.283035646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:16:14.284712 containerd[1627]: time="2024-06-26T07:16:14.283075591Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 26 07:16:14.284712 containerd[1627]: time="2024-06-26T07:16:14.283220263Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 26 07:16:14.284712 containerd[1627]: time="2024-06-26T07:16:14.283245246Z" level=info msg="metadata content store policy set" policy=shared
Jun 26 07:16:14.312657 containerd[1627]: time="2024-06-26T07:16:14.312560493Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 26 07:16:14.312657 containerd[1627]: time="2024-06-26T07:16:14.312664969Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 26 07:16:14.312885 containerd[1627]: time="2024-06-26T07:16:14.312693556Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 26 07:16:14.312885 containerd[1627]: time="2024-06-26T07:16:14.312782158Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 26 07:16:14.312885 containerd[1627]: time="2024-06-26T07:16:14.312808345Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 26 07:16:14.312885 containerd[1627]: time="2024-06-26T07:16:14.312879509Z" level=info msg="NRI interface is disabled by configuration."
Jun 26 07:16:14.313151 containerd[1627]: time="2024-06-26T07:16:14.312903555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 26 07:16:14.313234 containerd[1627]: time="2024-06-26T07:16:14.313177010Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 26 07:16:14.313234 containerd[1627]: time="2024-06-26T07:16:14.313224124Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 26 07:16:14.313329 containerd[1627]: time="2024-06-26T07:16:14.313243722Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 26 07:16:14.313329 containerd[1627]: time="2024-06-26T07:16:14.313265391Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 26 07:16:14.313329 containerd[1627]: time="2024-06-26T07:16:14.313286881Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 26 07:16:14.313329 containerd[1627]: time="2024-06-26T07:16:14.313317699Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 26 07:16:14.313533 containerd[1627]: time="2024-06-26T07:16:14.313337664Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 26 07:16:14.313533 containerd[1627]: time="2024-06-26T07:16:14.313356693Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 26 07:16:14.313533 containerd[1627]: time="2024-06-26T07:16:14.313405576Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 26 07:16:14.313533 containerd[1627]: time="2024-06-26T07:16:14.313442777Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 26 07:16:14.313533 containerd[1627]: time="2024-06-26T07:16:14.313462494Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 26 07:16:14.313533 containerd[1627]: time="2024-06-26T07:16:14.313481563Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 26 07:16:14.315843 containerd[1627]: time="2024-06-26T07:16:14.313676613Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.316436755Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.316505307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.316531976Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.316571721Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..."
type=io.containerd.internal.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.316653258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.316675456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.316992878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.317019723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.317040620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.317061549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.317082408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.317103448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317229 containerd[1627]: time="2024-06-26T07:16:14.317125199Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 26 07:16:14.317952 containerd[1627]: time="2024-06-26T07:16:14.317462634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317952 containerd[1627]: time="2024-06-26T07:16:14.317501637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 26 07:16:14.317952 containerd[1627]: time="2024-06-26T07:16:14.317524860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317952 containerd[1627]: time="2024-06-26T07:16:14.317547850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317952 containerd[1627]: time="2024-06-26T07:16:14.317572219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317952 containerd[1627]: time="2024-06-26T07:16:14.317595976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317952 containerd[1627]: time="2024-06-26T07:16:14.317618869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 26 07:16:14.317952 containerd[1627]: time="2024-06-26T07:16:14.317639030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 26 07:16:14.321377 containerd[1627]: time="2024-06-26T07:16:14.320059531Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 26 07:16:14.321377 containerd[1627]: time="2024-06-26T07:16:14.320186957Z" level=info msg="Connect containerd service" Jun 26 07:16:14.321377 containerd[1627]: time="2024-06-26T07:16:14.320256944Z" level=info msg="using legacy CRI server" Jun 26 07:16:14.321377 containerd[1627]: time="2024-06-26T07:16:14.320269580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 26 07:16:14.321377 containerd[1627]: time="2024-06-26T07:16:14.320416784Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 26 07:16:14.323135 containerd[1627]: time="2024-06-26T07:16:14.322220197Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 26 07:16:14.323135 containerd[1627]: time="2024-06-26T07:16:14.322320394Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 26 07:16:14.323135 containerd[1627]: time="2024-06-26T07:16:14.322354063Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 26 07:16:14.323135 containerd[1627]: time="2024-06-26T07:16:14.322372905Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 26 07:16:14.323135 containerd[1627]: time="2024-06-26T07:16:14.322394567Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.326218440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.326340278Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.326526269Z" level=info msg="Start subscribing containerd event" Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.326594188Z" level=info msg="Start recovering state" Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.326839286Z" level=info msg="Start event monitor" Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.326867513Z" level=info msg="Start snapshots syncer" Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.326883582Z" level=info msg="Start cni network conf syncer for default" Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.326894792Z" level=info msg="Start streaming server" Jun 26 07:16:14.327293 containerd[1627]: time="2024-06-26T07:16:14.327003220Z" level=info msg="containerd successfully booted in 0.203456s" Jun 26 07:16:14.328072 systemd[1]: Started containerd.service - containerd container runtime. Jun 26 07:16:14.919746 sshd_keygen[1619]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 26 07:16:15.026242 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jun 26 07:16:15.043278 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 26 07:16:15.091801 systemd[1]: issuegen.service: Deactivated successfully.
Jun 26 07:16:15.092268 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 26 07:16:15.115140 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 26 07:16:15.137717 tar[1617]: linux-amd64/LICENSE
Jun 26 07:16:15.137717 tar[1617]: linux-amd64/README.md
Jun 26 07:16:15.162650 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 26 07:16:15.170282 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 26 07:16:15.191624 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 26 07:16:15.208846 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 26 07:16:15.210882 systemd[1]: Reached target getty.target - Login Prompts.
Jun 26 07:16:16.491148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:16:16.502226 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 26 07:16:16.502760 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 26 07:16:16.506760 systemd[1]: Startup finished in 11.304s (kernel) + 13.493s (userspace) = 24.797s.
Jun 26 07:16:17.855319 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 26 07:16:17.865256 systemd[1]: Started sshd@0-144.126.218.72:22-147.75.109.163:51430.service - OpenSSH per-connection server daemon (147.75.109.163:51430).
Jun 26 07:16:17.973337 kubelet[1740]: E0626 07:16:17.973160 1740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 26 07:16:17.977980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 26 07:16:17.978223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 26 07:16:18.010906 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 51430 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:16:18.016518 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:18.042114 systemd-logind[1593]: New session 1 of user core.
Jun 26 07:16:18.043847 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 26 07:16:18.052606 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 26 07:16:18.085098 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 26 07:16:18.098363 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 26 07:16:18.121235 (systemd)[1759]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:18.329051 systemd[1759]: Queued start job for default target default.target.
Jun 26 07:16:18.340841 systemd[1759]: Created slice app.slice - User Application Slice.
Jun 26 07:16:18.340900 systemd[1759]: Reached target paths.target - Paths.
Jun 26 07:16:18.340929 systemd[1759]: Reached target timers.target - Timers.
Jun 26 07:16:18.357942 systemd[1759]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 26 07:16:18.376635 systemd[1759]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 26 07:16:18.376783 systemd[1759]: Reached target sockets.target - Sockets.
Jun 26 07:16:18.376810 systemd[1759]: Reached target basic.target - Basic System.
Jun 26 07:16:18.376893 systemd[1759]: Reached target default.target - Main User Target.
Jun 26 07:16:18.376943 systemd[1759]: Startup finished in 242ms.
Jun 26 07:16:18.377822 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 26 07:16:18.388623 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 26 07:16:18.475935 systemd[1]: Started sshd@1-144.126.218.72:22-147.75.109.163:51446.service - OpenSSH per-connection server daemon (147.75.109.163:51446).
Jun 26 07:16:18.538289 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 51446 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:16:18.543084 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:18.553328 systemd-logind[1593]: New session 2 of user core.
Jun 26 07:16:18.565414 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 26 07:16:18.639011 sshd[1771]: pam_unix(sshd:session): session closed for user core
Jun 26 07:16:18.649371 systemd[1]: Started sshd@2-144.126.218.72:22-147.75.109.163:51462.service - OpenSSH per-connection server daemon (147.75.109.163:51462).
Jun 26 07:16:18.652281 systemd[1]: sshd@1-144.126.218.72:22-147.75.109.163:51446.service: Deactivated successfully.
Jun 26 07:16:18.657586 systemd[1]: session-2.scope: Deactivated successfully.
Jun 26 07:16:18.659535 systemd-logind[1593]: Session 2 logged out. Waiting for processes to exit.
Jun 26 07:16:18.666239 systemd-logind[1593]: Removed session 2.
Jun 26 07:16:18.725761 sshd[1776]: Accepted publickey for core from 147.75.109.163 port 51462 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:16:18.727610 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:18.744089 systemd-logind[1593]: New session 3 of user core.
Jun 26 07:16:18.756668 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 26 07:16:18.825496 sshd[1776]: pam_unix(sshd:session): session closed for user core
Jun 26 07:16:18.837211 systemd[1]: Started sshd@3-144.126.218.72:22-147.75.109.163:51470.service - OpenSSH per-connection server daemon (147.75.109.163:51470).
Jun 26 07:16:18.839239 systemd[1]: sshd@2-144.126.218.72:22-147.75.109.163:51462.service: Deactivated successfully.
Jun 26 07:16:18.843384 systemd[1]: session-3.scope: Deactivated successfully.
Jun 26 07:16:18.845943 systemd-logind[1593]: Session 3 logged out. Waiting for processes to exit.
Jun 26 07:16:18.854769 systemd-logind[1593]: Removed session 3.
Jun 26 07:16:18.914493 sshd[1784]: Accepted publickey for core from 147.75.109.163 port 51470 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:16:18.916112 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:18.924247 systemd-logind[1593]: New session 4 of user core.
Jun 26 07:16:18.934334 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 26 07:16:19.010093 sshd[1784]: pam_unix(sshd:session): session closed for user core
Jun 26 07:16:19.023260 systemd[1]: Started sshd@4-144.126.218.72:22-147.75.109.163:51474.service - OpenSSH per-connection server daemon (147.75.109.163:51474).
Jun 26 07:16:19.024224 systemd[1]: sshd@3-144.126.218.72:22-147.75.109.163:51470.service: Deactivated successfully.
Jun 26 07:16:19.029952 systemd[1]: session-4.scope: Deactivated successfully.
Jun 26 07:16:19.042071 systemd-logind[1593]: Session 4 logged out. Waiting for processes to exit.
Jun 26 07:16:19.046636 systemd-logind[1593]: Removed session 4.
Jun 26 07:16:19.101371 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 51474 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:16:19.104379 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:19.117121 systemd-logind[1593]: New session 5 of user core.
Jun 26 07:16:19.128629 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 26 07:16:19.216427 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 26 07:16:19.217796 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 26 07:16:19.241597 sudo[1799]: pam_unix(sudo:session): session closed for user root
Jun 26 07:16:19.246113 sshd[1793]: pam_unix(sshd:session): session closed for user core
Jun 26 07:16:19.254325 systemd-logind[1593]: Session 5 logged out. Waiting for processes to exit.
Jun 26 07:16:19.254821 systemd[1]: sshd@4-144.126.218.72:22-147.75.109.163:51474.service: Deactivated successfully.
Jun 26 07:16:19.262365 systemd[1]: session-5.scope: Deactivated successfully.
Jun 26 07:16:19.266375 systemd-logind[1593]: Removed session 5.
Jun 26 07:16:19.274256 systemd[1]: Started sshd@5-144.126.218.72:22-147.75.109.163:51490.service - OpenSSH per-connection server daemon (147.75.109.163:51490).
Jun 26 07:16:19.338718 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 51490 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:16:19.340643 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:19.354624 systemd-logind[1593]: New session 6 of user core.
Jun 26 07:16:19.361346 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 26 07:16:19.431177 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 26 07:16:19.432562 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 26 07:16:19.439459 sudo[1809]: pam_unix(sudo:session): session closed for user root
Jun 26 07:16:19.450198 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jun 26 07:16:19.451405 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 26 07:16:19.486302 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jun 26 07:16:19.504637 auditctl[1812]: No rules
Jun 26 07:16:19.505851 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 26 07:16:19.506313 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jun 26 07:16:19.520619 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 26 07:16:19.574833 augenrules[1831]: No rules
Jun 26 07:16:19.583161 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 26 07:16:19.586232 sudo[1808]: pam_unix(sudo:session): session closed for user root
Jun 26 07:16:19.592971 sshd[1804]: pam_unix(sshd:session): session closed for user core
Jun 26 07:16:19.607255 systemd[1]: Started sshd@6-144.126.218.72:22-147.75.109.163:51494.service - OpenSSH per-connection server daemon (147.75.109.163:51494).
Jun 26 07:16:19.608242 systemd[1]: sshd@5-144.126.218.72:22-147.75.109.163:51490.service: Deactivated successfully.
Jun 26 07:16:19.613521 systemd-logind[1593]: Session 6 logged out. Waiting for processes to exit.
Jun 26 07:16:19.613975 systemd[1]: session-6.scope: Deactivated successfully.
Jun 26 07:16:19.619880 systemd-logind[1593]: Removed session 6.
Jun 26 07:16:19.687911 sshd[1837]: Accepted publickey for core from 147.75.109.163 port 51494 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:16:19.690530 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:19.700543 systemd-logind[1593]: New session 7 of user core.
Jun 26 07:16:19.711523 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 26 07:16:19.780626 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 26 07:16:19.781159 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 26 07:16:20.052547 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 26 07:16:20.053589 (dockerd)[1853]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 26 07:16:20.639643 dockerd[1853]: time="2024-06-26T07:16:20.634843518Z" level=info msg="Starting up"
Jun 26 07:16:20.716012 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2251878839-merged.mount: Deactivated successfully.
Jun 26 07:16:21.043645 dockerd[1853]: time="2024-06-26T07:16:21.042731475Z" level=info msg="Loading containers: start."
Jun 26 07:16:21.369976 kernel: Initializing XFRM netlink socket
Jun 26 07:16:21.597805 systemd-networkd[1248]: docker0: Link UP
Jun 26 07:16:21.648715 dockerd[1853]: time="2024-06-26T07:16:21.647663070Z" level=info msg="Loading containers: done."
Jun 26 07:16:21.787908 dockerd[1853]: time="2024-06-26T07:16:21.787662322Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 26 07:16:21.792982 dockerd[1853]: time="2024-06-26T07:16:21.792284832Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jun 26 07:16:21.792982 dockerd[1853]: time="2024-06-26T07:16:21.792541566Z" level=info msg="Daemon has completed initialization"
Jun 26 07:16:21.901753 dockerd[1853]: time="2024-06-26T07:16:21.901518437Z" level=info msg="API listen on /run/docker.sock"
Jun 26 07:16:21.906653 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 26 07:16:23.639830 containerd[1627]: time="2024-06-26T07:16:23.639211754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jun 26 07:16:24.771023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864096182.mount: Deactivated successfully.
Jun 26 07:16:28.219473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 26 07:16:28.229186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:16:28.511116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:16:28.523427 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 26 07:16:28.674989 kubelet[2059]: E0626 07:16:28.674860 2059 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 26 07:16:28.683251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 26 07:16:28.683715 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 26 07:16:28.879655 containerd[1627]: time="2024-06-26T07:16:28.878064484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:28.882503 containerd[1627]: time="2024-06-26T07:16:28.881415820Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178"
Jun 26 07:16:28.887251 containerd[1627]: time="2024-06-26T07:16:28.885521009Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:28.897253 containerd[1627]: time="2024-06-26T07:16:28.897118544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:28.900584 containerd[1627]: time="2024-06-26T07:16:28.899796853Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 5.260491332s"
Jun 26 07:16:28.900584 containerd[1627]: time="2024-06-26T07:16:28.900033510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\""
Jun 26 07:16:28.962933 containerd[1627]: time="2024-06-26T07:16:28.961736649Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jun 26 07:16:32.514728 containerd[1627]: time="2024-06-26T07:16:32.513354512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:32.518105 containerd[1627]: time="2024-06-26T07:16:32.518011225Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491"
Jun 26 07:16:32.522661 containerd[1627]: time="2024-06-26T07:16:32.522584860Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:32.532361 containerd[1627]: time="2024-06-26T07:16:32.532288209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:32.536716 containerd[1627]: time="2024-06-26T07:16:32.535090492Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 3.57327267s"
Jun 26 07:16:32.536716 containerd[1627]: time="2024-06-26T07:16:32.535174031Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jun 26 07:16:32.583059 containerd[1627]: time="2024-06-26T07:16:32.582993154Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jun 26 07:16:34.312019 containerd[1627]: time="2024-06-26T07:16:34.311741134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:34.315655 containerd[1627]: time="2024-06-26T07:16:34.315235734Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505"
Jun 26 07:16:34.318172 containerd[1627]: time="2024-06-26T07:16:34.318103408Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:34.325811 containerd[1627]: time="2024-06-26T07:16:34.325724296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:34.328278 containerd[1627]: time="2024-06-26T07:16:34.328206573Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.745147925s"
Jun 26 07:16:34.328278 containerd[1627]: time="2024-06-26T07:16:34.328271539Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jun 26 07:16:34.366703 containerd[1627]: time="2024-06-26T07:16:34.366321661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jun 26 07:16:35.986709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313623758.mount: Deactivated successfully.
Jun 26 07:16:36.589005 containerd[1627]: time="2024-06-26T07:16:36.588886239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:36.591314 containerd[1627]: time="2024-06-26T07:16:36.590879619Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419"
Jun 26 07:16:36.595713 containerd[1627]: time="2024-06-26T07:16:36.594855088Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:36.600003 containerd[1627]: time="2024-06-26T07:16:36.599923989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:36.601517 containerd[1627]: time="2024-06-26T07:16:36.601445430Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.235060672s"
Jun 26 07:16:36.601517 containerd[1627]: time="2024-06-26T07:16:36.601505529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jun 26 07:16:36.640440 containerd[1627]: time="2024-06-26T07:16:36.640385094Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jun 26 07:16:37.314536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486571528.mount: Deactivated successfully.
Jun 26 07:16:37.329528 containerd[1627]: time="2024-06-26T07:16:37.327919269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:37.329914 containerd[1627]: time="2024-06-26T07:16:37.329795743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jun 26 07:16:37.332013 containerd[1627]: time="2024-06-26T07:16:37.331945342Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:37.338098 containerd[1627]: time="2024-06-26T07:16:37.338022274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:37.339709 containerd[1627]: time="2024-06-26T07:16:37.339608656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 699.166613ms"
Jun 26 07:16:37.340126 containerd[1627]: time="2024-06-26T07:16:37.340082389Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jun 26 07:16:37.381313 containerd[1627]: time="2024-06-26T07:16:37.381263404Z" level=info
msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 26 07:16:38.127211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197371372.mount: Deactivated successfully. Jun 26 07:16:38.719633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 26 07:16:38.728865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:16:38.988334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:16:39.012765 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:16:39.130107 kubelet[2138]: E0626 07:16:39.130030 2138 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:16:39.137976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:16:39.138314 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 26 07:16:41.199160 containerd[1627]: time="2024-06-26T07:16:41.198354205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:41.201768 containerd[1627]: time="2024-06-26T07:16:41.201700457Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 26 07:16:41.203459 containerd[1627]: time="2024-06-26T07:16:41.203392742Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:41.214745 containerd[1627]: time="2024-06-26T07:16:41.213013662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:41.216063 containerd[1627]: time="2024-06-26T07:16:41.215977105Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.834653604s" Jun 26 07:16:41.216321 containerd[1627]: time="2024-06-26T07:16:41.216292309Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 26 07:16:41.262439 containerd[1627]: time="2024-06-26T07:16:41.262342937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 26 07:16:42.111396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723960628.mount: Deactivated successfully. 
Jun 26 07:16:42.937075 containerd[1627]: time="2024-06-26T07:16:42.935960898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:42.943665 containerd[1627]: time="2024-06-26T07:16:42.943302978Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 26 07:16:42.945851 containerd[1627]: time="2024-06-26T07:16:42.944865592Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:42.952219 containerd[1627]: time="2024-06-26T07:16:42.952152453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:42.953525 containerd[1627]: time="2024-06-26T07:16:42.953446121Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.690621187s" Jun 26 07:16:42.953860 containerd[1627]: time="2024-06-26T07:16:42.953821186Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 26 07:16:46.624808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:16:46.638347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:16:46.689568 systemd[1]: Reloading requested from client PID 2258 ('systemctl') (unit session-7.scope)... 
Jun 26 07:16:46.689594 systemd[1]: Reloading... Jun 26 07:16:46.882764 zram_generator::config[2293]: No configuration found. Jun 26 07:16:47.153474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 26 07:16:47.293473 systemd[1]: Reloading finished in 603 ms. Jun 26 07:16:47.354986 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 26 07:16:47.355135 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 26 07:16:47.355604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:16:47.366093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:16:47.633136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:16:47.648364 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 26 07:16:47.784720 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 26 07:16:47.784720 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 26 07:16:47.784720 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 26 07:16:47.784720 kubelet[2358]: I0626 07:16:47.783787 2358 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 26 07:16:48.813276 kubelet[2358]: I0626 07:16:48.813222 2358 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 26 07:16:48.814260 kubelet[2358]: I0626 07:16:48.814073 2358 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 26 07:16:48.814715 kubelet[2358]: I0626 07:16:48.814690 2358 server.go:895] "Client rotation is on, will bootstrap in background" Jun 26 07:16:48.872797 kubelet[2358]: E0626 07:16:48.872747 2358 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://144.126.218.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:48.872960 kubelet[2358]: I0626 07:16:48.872908 2358 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 26 07:16:48.894441 kubelet[2358]: I0626 07:16:48.894395 2358 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 26 07:16:48.897975 kubelet[2358]: I0626 07:16:48.897359 2358 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 26 07:16:48.897975 kubelet[2358]: I0626 07:16:48.897810 2358 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 26 07:16:48.898612 kubelet[2358]: I0626 07:16:48.898578 2358 topology_manager.go:138] "Creating topology manager with none policy" Jun 26 07:16:48.898773 kubelet[2358]: I0626 07:16:48.898760 2358 container_manager_linux.go:301] "Creating device plugin manager" Jun 26 07:16:48.900752 kubelet[2358]: 
I0626 07:16:48.900700 2358 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:16:48.903068 kubelet[2358]: I0626 07:16:48.903027 2358 kubelet.go:393] "Attempting to sync node with API server" Jun 26 07:16:48.903743 kubelet[2358]: I0626 07:16:48.903238 2358 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 26 07:16:48.903743 kubelet[2358]: I0626 07:16:48.903290 2358 kubelet.go:309] "Adding apiserver pod source" Jun 26 07:16:48.903743 kubelet[2358]: I0626 07:16:48.903314 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 26 07:16:48.907421 kubelet[2358]: W0626 07:16:48.906793 2358 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://144.126.218.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-2-1603354b52&limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:48.907421 kubelet[2358]: E0626 07:16:48.906885 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://144.126.218.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-2-1603354b52&limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:48.908601 kubelet[2358]: I0626 07:16:48.908111 2358 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 26 07:16:48.914913 kubelet[2358]: W0626 07:16:48.914804 2358 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://144.126.218.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:48.914913 kubelet[2358]: E0626 07:16:48.914900 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://144.126.218.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:48.915973 kubelet[2358]: W0626 07:16:48.915747 2358 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 26 07:16:48.916926 kubelet[2358]: I0626 07:16:48.916887 2358 server.go:1232] "Started kubelet" Jun 26 07:16:48.920719 kubelet[2358]: I0626 07:16:48.919100 2358 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 26 07:16:48.920979 kubelet[2358]: I0626 07:16:48.920949 2358 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 26 07:16:48.921100 kubelet[2358]: I0626 07:16:48.921082 2358 server.go:462] "Adding debug handlers to kubelet server" Jun 26 07:16:48.921458 kubelet[2358]: I0626 07:16:48.921423 2358 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 26 07:16:48.921922 kubelet[2358]: E0626 07:16:48.921751 2358 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.0.0-2-1603354b52.17dc7ca5b631b1a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.0.0-2-1603354b52", UID:"ci-4012.0.0-2-1603354b52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.0.0-2-1603354b52"}, FirstTimestamp:time.Date(2024, time.June, 26, 
7, 16, 48, 916844968, time.Local), LastTimestamp:time.Date(2024, time.June, 26, 7, 16, 48, 916844968, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.0.0-2-1603354b52"}': 'Post "https://144.126.218.72:6443/api/v1/namespaces/default/events": dial tcp 144.126.218.72:6443: connect: connection refused'(may retry after sleeping) Jun 26 07:16:48.924937 kubelet[2358]: I0626 07:16:48.924511 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 26 07:16:48.929994 kubelet[2358]: E0626 07:16:48.929955 2358 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 26 07:16:48.938207 kubelet[2358]: E0626 07:16:48.938145 2358 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 26 07:16:48.938560 kubelet[2358]: I0626 07:16:48.931039 2358 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 26 07:16:48.938999 kubelet[2358]: I0626 07:16:48.931063 2358 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 26 07:16:48.940162 kubelet[2358]: W0626 07:16:48.940085 2358 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://144.126.218.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:48.941598 kubelet[2358]: E0626 07:16:48.941568 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://144.126.218.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:48.942761 kubelet[2358]: I0626 07:16:48.939366 2358 reconciler_new.go:29] "Reconciler: start to sync state" Jun 26 07:16:48.942761 kubelet[2358]: E0626 07:16:48.940550 2358 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.218.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-2-1603354b52?timeout=10s\": dial tcp 144.126.218.72:6443: connect: connection refused" interval="200ms" Jun 26 07:16:49.005786 kubelet[2358]: I0626 07:16:49.005657 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 26 07:16:49.008851 kubelet[2358]: I0626 07:16:49.008806 2358 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 26 07:16:49.009159 kubelet[2358]: I0626 07:16:49.009137 2358 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 26 07:16:49.013814 kubelet[2358]: I0626 07:16:49.013774 2358 kubelet.go:2303] "Starting kubelet main sync loop" Jun 26 07:16:49.014871 kubelet[2358]: E0626 07:16:49.014774 2358 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 26 07:16:49.016445 kubelet[2358]: W0626 07:16:49.016410 2358 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://144.126.218.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:49.018260 kubelet[2358]: E0626 07:16:49.018238 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://144.126.218.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:49.034080 kubelet[2358]: I0626 07:16:49.034021 2358 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.034566 kubelet[2358]: E0626 07:16:49.034524 2358 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://144.126.218.72:6443/api/v1/nodes\": dial tcp 144.126.218.72:6443: connect: connection refused" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.044044 kubelet[2358]: I0626 07:16:49.041648 2358 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 26 07:16:49.044044 kubelet[2358]: I0626 07:16:49.041706 2358 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 26 07:16:49.044044 kubelet[2358]: I0626 07:16:49.041745 2358 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:16:49.049399 
kubelet[2358]: I0626 07:16:49.049340 2358 policy_none.go:49] "None policy: Start" Jun 26 07:16:49.056435 kubelet[2358]: I0626 07:16:49.051820 2358 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 26 07:16:49.056435 kubelet[2358]: I0626 07:16:49.051881 2358 state_mem.go:35] "Initializing new in-memory state store" Jun 26 07:16:49.073756 kubelet[2358]: I0626 07:16:49.069519 2358 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 26 07:16:49.083418 kubelet[2358]: I0626 07:16:49.079230 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 26 07:16:49.088582 kubelet[2358]: E0626 07:16:49.088540 2358 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-2-1603354b52\" not found" Jun 26 07:16:49.115313 kubelet[2358]: I0626 07:16:49.115263 2358 topology_manager.go:215] "Topology Admit Handler" podUID="88f435673380c1a59f81f64f4f03a047" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.118505 kubelet[2358]: I0626 07:16:49.117297 2358 topology_manager.go:215] "Topology Admit Handler" podUID="dea80f6749dae5f7261b0cd5f0860463" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.124648 kubelet[2358]: I0626 07:16:49.124600 2358 topology_manager.go:215] "Topology Admit Handler" podUID="988f38c0938d7b6fcfb0a76fb069a7ac" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.146044 kubelet[2358]: E0626 07:16:49.145984 2358 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.218.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-2-1603354b52?timeout=10s\": dial tcp 144.126.218.72:6443: connect: connection refused" interval="400ms" Jun 26 07:16:49.146949 kubelet[2358]: I0626 07:16:49.146888 2358 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.147204 kubelet[2358]: I0626 07:16:49.147182 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.147712 kubelet[2358]: I0626 07:16:49.147338 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.147712 kubelet[2358]: I0626 07:16:49.147381 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88f435673380c1a59f81f64f4f03a047-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-2-1603354b52\" (UID: \"88f435673380c1a59f81f64f4f03a047\") " pod="kube-system/kube-apiserver-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.147712 kubelet[2358]: I0626 07:16:49.147435 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: 
\"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.147712 kubelet[2358]: I0626 07:16:49.147508 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/988f38c0938d7b6fcfb0a76fb069a7ac-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-2-1603354b52\" (UID: \"988f38c0938d7b6fcfb0a76fb069a7ac\") " pod="kube-system/kube-scheduler-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.147712 kubelet[2358]: I0626 07:16:49.147559 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88f435673380c1a59f81f64f4f03a047-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-2-1603354b52\" (UID: \"88f435673380c1a59f81f64f4f03a047\") " pod="kube-system/kube-apiserver-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.148093 kubelet[2358]: I0626 07:16:49.147633 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88f435673380c1a59f81f64f4f03a047-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-2-1603354b52\" (UID: \"88f435673380c1a59f81f64f4f03a047\") " pod="kube-system/kube-apiserver-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.148271 kubelet[2358]: I0626 07:16:49.148244 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.237111 kubelet[2358]: I0626 07:16:49.237067 2358 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.237716 kubelet[2358]: 
E0626 07:16:49.237661 2358 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://144.126.218.72:6443/api/v1/nodes\": dial tcp 144.126.218.72:6443: connect: connection refused" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.431759 kubelet[2358]: E0626 07:16:49.430916 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:49.434162 containerd[1627]: time="2024-06-26T07:16:49.434095425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-2-1603354b52,Uid:88f435673380c1a59f81f64f4f03a047,Namespace:kube-system,Attempt:0,}" Jun 26 07:16:49.452898 kubelet[2358]: E0626 07:16:49.452531 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:49.453122 kubelet[2358]: E0626 07:16:49.453104 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:49.461052 containerd[1627]: time="2024-06-26T07:16:49.460723550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-2-1603354b52,Uid:988f38c0938d7b6fcfb0a76fb069a7ac,Namespace:kube-system,Attempt:0,}" Jun 26 07:16:49.461052 containerd[1627]: time="2024-06-26T07:16:49.460816072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-2-1603354b52,Uid:dea80f6749dae5f7261b0cd5f0860463,Namespace:kube-system,Attempt:0,}" Jun 26 07:16:49.547901 kubelet[2358]: E0626 07:16:49.547837 2358 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://144.126.218.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-2-1603354b52?timeout=10s\": dial tcp 144.126.218.72:6443: connect: connection refused" interval="800ms" Jun 26 07:16:49.640989 kubelet[2358]: I0626 07:16:49.640182 2358 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.641856 kubelet[2358]: E0626 07:16:49.641702 2358 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://144.126.218.72:6443/api/v1/nodes\": dial tcp 144.126.218.72:6443: connect: connection refused" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:49.783869 kubelet[2358]: W0626 07:16:49.783725 2358 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://144.126.218.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:49.783869 kubelet[2358]: E0626 07:16:49.783851 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://144.126.218.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:49.902947 kubelet[2358]: W0626 07:16:49.902814 2358 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://144.126.218.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:49.902947 kubelet[2358]: E0626 07:16:49.902941 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://144.126.218.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 
07:16:49.933220 kubelet[2358]: W0626 07:16:49.933084 2358 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://144.126.218.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-2-1603354b52&limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:49.933220 kubelet[2358]: E0626 07:16:49.933203 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://144.126.218.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-2-1603354b52&limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:50.314808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946441727.mount: Deactivated successfully. Jun 26 07:16:50.336071 containerd[1627]: time="2024-06-26T07:16:50.335966612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:16:50.342164 containerd[1627]: time="2024-06-26T07:16:50.341407067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 26 07:16:50.343204 containerd[1627]: time="2024-06-26T07:16:50.343103421Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:16:50.348960 containerd[1627]: time="2024-06-26T07:16:50.348798824Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:16:50.349509 kubelet[2358]: E0626 07:16:50.349452 2358 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://144.126.218.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-2-1603354b52?timeout=10s\": dial tcp 144.126.218.72:6443: connect: connection refused" interval="1.6s" Jun 26 07:16:50.352434 kubelet[2358]: W0626 07:16:50.352122 2358 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://144.126.218.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:50.352434 kubelet[2358]: E0626 07:16:50.352238 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://144.126.218.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:50.353747 containerd[1627]: time="2024-06-26T07:16:50.353702863Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:16:50.354343 containerd[1627]: time="2024-06-26T07:16:50.354284396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 26 07:16:50.358063 containerd[1627]: time="2024-06-26T07:16:50.357981075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 26 07:16:50.364127 containerd[1627]: time="2024-06-26T07:16:50.364053610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:16:50.367825 containerd[1627]: time="2024-06-26T07:16:50.367762596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 906.802359ms" Jun 26 07:16:50.371740 containerd[1627]: time="2024-06-26T07:16:50.371351503Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 910.454955ms" Jun 26 07:16:50.372255 containerd[1627]: time="2024-06-26T07:16:50.372023791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 937.788933ms" Jun 26 07:16:50.444164 kubelet[2358]: I0626 07:16:50.444116 2358 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:50.445128 kubelet[2358]: E0626 07:16:50.444970 2358 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://144.126.218.72:6443/api/v1/nodes\": dial tcp 144.126.218.72:6443: connect: connection refused" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:50.730735 containerd[1627]: time="2024-06-26T07:16:50.724527733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:16:50.730735 containerd[1627]: time="2024-06-26T07:16:50.724625211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:50.730735 containerd[1627]: time="2024-06-26T07:16:50.724657605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:16:50.730735 containerd[1627]: time="2024-06-26T07:16:50.724697974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:50.767491 containerd[1627]: time="2024-06-26T07:16:50.766202750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:16:50.767491 containerd[1627]: time="2024-06-26T07:16:50.766291376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:50.767491 containerd[1627]: time="2024-06-26T07:16:50.766321610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:16:50.767491 containerd[1627]: time="2024-06-26T07:16:50.766339896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:50.821928 containerd[1627]: time="2024-06-26T07:16:50.819248187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:16:50.821928 containerd[1627]: time="2024-06-26T07:16:50.819373852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:50.821928 containerd[1627]: time="2024-06-26T07:16:50.819449005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:16:50.821928 containerd[1627]: time="2024-06-26T07:16:50.819483852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:50.881657 kubelet[2358]: E0626 07:16:50.881611 2358 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://144.126.218.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 144.126.218.72:6443: connect: connection refused Jun 26 07:16:50.979877 containerd[1627]: time="2024-06-26T07:16:50.977091433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-2-1603354b52,Uid:88f435673380c1a59f81f64f4f03a047,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed1557e3a626ca47996d80fae6dfad964d3d6e5cede216b825499c8f66e51b76\"" Jun 26 07:16:50.983839 kubelet[2358]: E0626 07:16:50.982460 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:50.995280 containerd[1627]: time="2024-06-26T07:16:50.994952926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-2-1603354b52,Uid:988f38c0938d7b6fcfb0a76fb069a7ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"575cea277ef8d42c5344ad52b2937890a9f7838e1032c1b344c0a2d2a649c2b1\"" Jun 26 07:16:50.999284 containerd[1627]: time="2024-06-26T07:16:50.999202608Z" level=info msg="CreateContainer within sandbox \"ed1557e3a626ca47996d80fae6dfad964d3d6e5cede216b825499c8f66e51b76\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 26 07:16:51.000632 kubelet[2358]: E0626 07:16:51.000594 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:51.008305 containerd[1627]: time="2024-06-26T07:16:51.008224643Z" level=info msg="CreateContainer within sandbox \"575cea277ef8d42c5344ad52b2937890a9f7838e1032c1b344c0a2d2a649c2b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 26 07:16:51.068594 containerd[1627]: time="2024-06-26T07:16:51.068535073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-2-1603354b52,Uid:dea80f6749dae5f7261b0cd5f0860463,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dad29ad4ccb768ca72970cee27e08079540fccc5631be22ba32e52257efca42\"" Jun 26 07:16:51.071044 kubelet[2358]: E0626 07:16:51.070156 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:51.080984 containerd[1627]: time="2024-06-26T07:16:51.080726973Z" level=info msg="CreateContainer within sandbox \"0dad29ad4ccb768ca72970cee27e08079540fccc5631be22ba32e52257efca42\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 26 07:16:51.142588 containerd[1627]: time="2024-06-26T07:16:51.142327414Z" level=info msg="CreateContainer within sandbox \"ed1557e3a626ca47996d80fae6dfad964d3d6e5cede216b825499c8f66e51b76\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be90ed4095e45169b6e6a716938d499c141b1ce7b2bc6dbee3376340eeb5d1c5\"" Jun 26 07:16:51.143998 containerd[1627]: time="2024-06-26T07:16:51.143253887Z" level=info msg="StartContainer for \"be90ed4095e45169b6e6a716938d499c141b1ce7b2bc6dbee3376340eeb5d1c5\"" Jun 26 07:16:51.157268 containerd[1627]: time="2024-06-26T07:16:51.157149370Z" level=info msg="CreateContainer within sandbox \"575cea277ef8d42c5344ad52b2937890a9f7838e1032c1b344c0a2d2a649c2b1\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f7f82b5aa4971bc382d9cb1443a53e6846da2593c14686cb345e9bfbb2e2c6b7\"" Jun 26 07:16:51.168311 containerd[1627]: time="2024-06-26T07:16:51.158950928Z" level=info msg="StartContainer for \"f7f82b5aa4971bc382d9cb1443a53e6846da2593c14686cb345e9bfbb2e2c6b7\"" Jun 26 07:16:51.199404 containerd[1627]: time="2024-06-26T07:16:51.199322891Z" level=info msg="CreateContainer within sandbox \"0dad29ad4ccb768ca72970cee27e08079540fccc5631be22ba32e52257efca42\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce6e605608984f37cf75cb0a0cb59ee8037985e32d2dc68502c53fda85a3c2c6\"" Jun 26 07:16:51.201204 containerd[1627]: time="2024-06-26T07:16:51.201001581Z" level=info msg="StartContainer for \"ce6e605608984f37cf75cb0a0cb59ee8037985e32d2dc68502c53fda85a3c2c6\"" Jun 26 07:16:51.442510 containerd[1627]: time="2024-06-26T07:16:51.441390170Z" level=info msg="StartContainer for \"be90ed4095e45169b6e6a716938d499c141b1ce7b2bc6dbee3376340eeb5d1c5\" returns successfully" Jun 26 07:16:51.487729 containerd[1627]: time="2024-06-26T07:16:51.487033043Z" level=info msg="StartContainer for \"ce6e605608984f37cf75cb0a0cb59ee8037985e32d2dc68502c53fda85a3c2c6\" returns successfully" Jun 26 07:16:51.594332 containerd[1627]: time="2024-06-26T07:16:51.594263787Z" level=info msg="StartContainer for \"f7f82b5aa4971bc382d9cb1443a53e6846da2593c14686cb345e9bfbb2e2c6b7\" returns successfully" Jun 26 07:16:51.951187 kubelet[2358]: E0626 07:16:51.951124 2358 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.218.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-2-1603354b52?timeout=10s\": dial tcp 144.126.218.72:6443: connect: connection refused" interval="3.2s" Jun 26 07:16:52.056265 kubelet[2358]: I0626 07:16:52.051532 2358 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:52.061870 
kubelet[2358]: E0626 07:16:52.059572 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:52.076225 kubelet[2358]: E0626 07:16:52.076184 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:52.083113 kubelet[2358]: E0626 07:16:52.083062 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:53.095661 kubelet[2358]: E0626 07:16:53.087454 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:53.095661 kubelet[2358]: E0626 07:16:53.091487 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:54.088439 kubelet[2358]: E0626 07:16:54.088224 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:54.088439 kubelet[2358]: E0626 07:16:54.088353 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:54.908799 kubelet[2358]: I0626 07:16:54.908651 2358 apiserver.go:52] "Watching apiserver" Jun 26 07:16:54.940568 kubelet[2358]: I0626 07:16:54.940436 2358 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 26 
07:16:54.977259 kubelet[2358]: I0626 07:16:54.977195 2358 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-2-1603354b52" Jun 26 07:16:55.125379 kubelet[2358]: E0626 07:16:55.124090 2358 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.0.0-2-1603354b52\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4012.0.0-2-1603354b52" Jun 26 07:16:55.125379 kubelet[2358]: E0626 07:16:55.125152 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:57.771409 kubelet[2358]: W0626 07:16:57.770754 2358 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:16:57.776152 kubelet[2358]: E0626 07:16:57.775674 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:58.100446 kubelet[2358]: E0626 07:16:58.100290 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:16:58.357358 systemd[1]: Reloading requested from client PID 2632 ('systemctl') (unit session-7.scope)... Jun 26 07:16:58.358377 systemd[1]: Reloading... Jun 26 07:16:58.620419 zram_generator::config[2672]: No configuration found. Jun 26 07:16:58.749758 update_engine[1596]: I0626 07:16:58.747568 1596 update_attempter.cc:509] Updating boot flags... 
Jun 26 07:16:58.853839 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2722) Jun 26 07:16:58.993533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 26 07:16:59.258535 systemd[1]: Reloading finished in 899 ms. Jun 26 07:16:59.367600 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:16:59.391168 kubelet[2358]: I0626 07:16:59.383202 2358 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 26 07:16:59.419549 systemd[1]: kubelet.service: Deactivated successfully. Jun 26 07:16:59.422442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:16:59.436875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:16:59.710178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:16:59.725007 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 26 07:16:59.921232 kubelet[2743]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 26 07:16:59.921232 kubelet[2743]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 26 07:16:59.921232 kubelet[2743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 26 07:16:59.921232 kubelet[2743]: I0626 07:16:59.920036 2743 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 26 07:16:59.941755 kubelet[2743]: I0626 07:16:59.941636 2743 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 26 07:16:59.941755 kubelet[2743]: I0626 07:16:59.941725 2743 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 26 07:16:59.942176 kubelet[2743]: I0626 07:16:59.942137 2743 server.go:895] "Client rotation is on, will bootstrap in background" Jun 26 07:16:59.949112 kubelet[2743]: I0626 07:16:59.949037 2743 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 26 07:16:59.950988 kubelet[2743]: I0626 07:16:59.950745 2743 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 26 07:16:59.964810 kubelet[2743]: I0626 07:16:59.963792 2743 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 26 07:16:59.964810 kubelet[2743]: I0626 07:16:59.964544 2743 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 26 07:16:59.965263 kubelet[2743]: I0626 07:16:59.965222 2743 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 26 07:16:59.965478 kubelet[2743]: I0626 07:16:59.965464 2743 topology_manager.go:138] "Creating topology manager with none policy" Jun 26 07:16:59.965618 kubelet[2743]: I0626 07:16:59.965602 2743 container_manager_linux.go:301] "Creating device plugin manager" Jun 26 07:16:59.965821 kubelet[2743]: 
I0626 07:16:59.965806 2743 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:16:59.966109 kubelet[2743]: I0626 07:16:59.966091 2743 kubelet.go:393] "Attempting to sync node with API server" Jun 26 07:16:59.966256 kubelet[2743]: I0626 07:16:59.966242 2743 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 26 07:16:59.966831 kubelet[2743]: I0626 07:16:59.966812 2743 kubelet.go:309] "Adding apiserver pod source" Jun 26 07:16:59.966961 kubelet[2743]: I0626 07:16:59.966951 2743 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 26 07:16:59.989714 kubelet[2743]: I0626 07:16:59.973027 2743 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 26 07:16:59.999714 kubelet[2743]: I0626 07:16:59.993871 2743 server.go:1232] "Started kubelet" Jun 26 07:17:00.003870 kubelet[2743]: I0626 07:17:00.003820 2743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 26 07:17:00.014465 kubelet[2743]: I0626 07:17:00.014404 2743 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 26 07:17:00.016416 kubelet[2743]: I0626 07:17:00.015915 2743 server.go:462] "Adding debug handlers to kubelet server" Jun 26 07:17:00.028720 kubelet[2743]: I0626 07:17:00.020580 2743 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 26 07:17:00.028720 kubelet[2743]: I0626 07:17:00.021077 2743 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 26 07:17:00.047416 kubelet[2743]: I0626 07:17:00.044682 2743 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 26 07:17:00.047416 kubelet[2743]: I0626 07:17:00.045491 2743 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 26 07:17:00.047416 kubelet[2743]: I0626 07:17:00.045819 2743 reconciler_new.go:29] "Reconciler: start to sync state" Jun 26 
07:17:00.052249 kubelet[2743]: E0626 07:17:00.050957 2743 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 26 07:17:00.052249 kubelet[2743]: E0626 07:17:00.051008 2743 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 26 07:17:00.058193 kubelet[2743]: I0626 07:17:00.057348 2743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 26 07:17:00.061309 kubelet[2743]: I0626 07:17:00.061254 2743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 26 07:17:00.061309 kubelet[2743]: I0626 07:17:00.061302 2743 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 26 07:17:00.061601 kubelet[2743]: I0626 07:17:00.061337 2743 kubelet.go:2303] "Starting kubelet main sync loop" Jun 26 07:17:00.062485 kubelet[2743]: E0626 07:17:00.062444 2743 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 26 07:17:00.149123 kubelet[2743]: I0626 07:17:00.149085 2743 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.163820 kubelet[2743]: E0626 07:17:00.163762 2743 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 26 07:17:00.172544 kubelet[2743]: I0626 07:17:00.172423 2743 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.177959 kubelet[2743]: I0626 07:17:00.177886 2743 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.296580 kubelet[2743]: I0626 07:17:00.296151 2743 cpu_manager.go:214] "Starting CPU manager" policy="none" 
Jun 26 07:17:00.296580 kubelet[2743]: I0626 07:17:00.296184 2743 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 26 07:17:00.296580 kubelet[2743]: I0626 07:17:00.296218 2743 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:17:00.297379 kubelet[2743]: I0626 07:17:00.296984 2743 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 26 07:17:00.297379 kubelet[2743]: I0626 07:17:00.297036 2743 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 26 07:17:00.297379 kubelet[2743]: I0626 07:17:00.297049 2743 policy_none.go:49] "None policy: Start" Jun 26 07:17:00.299746 kubelet[2743]: I0626 07:17:00.298776 2743 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 26 07:17:00.299746 kubelet[2743]: I0626 07:17:00.298841 2743 state_mem.go:35] "Initializing new in-memory state store" Jun 26 07:17:00.299746 kubelet[2743]: I0626 07:17:00.299313 2743 state_mem.go:75] "Updated machine memory state" Jun 26 07:17:00.302087 kubelet[2743]: I0626 07:17:00.302055 2743 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 26 07:17:00.326147 kubelet[2743]: I0626 07:17:00.324568 2743 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 26 07:17:00.366459 kubelet[2743]: I0626 07:17:00.365883 2743 topology_manager.go:215] "Topology Admit Handler" podUID="988f38c0938d7b6fcfb0a76fb069a7ac" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.366459 kubelet[2743]: I0626 07:17:00.366060 2743 topology_manager.go:215] "Topology Admit Handler" podUID="88f435673380c1a59f81f64f4f03a047" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.366459 kubelet[2743]: I0626 07:17:00.366116 2743 topology_manager.go:215] "Topology Admit Handler" podUID="dea80f6749dae5f7261b0cd5f0860463" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.380891 
kubelet[2743]: W0626 07:17:00.380205 2743 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:17:00.388015 kubelet[2743]: W0626 07:17:00.387963 2743 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:17:00.389637 kubelet[2743]: W0626 07:17:00.389394 2743 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:17:00.389637 kubelet[2743]: E0626 07:17:00.389550 2743 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4012.0.0-2-1603354b52\" already exists" pod="kube-system/kube-scheduler-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.449748 kubelet[2743]: I0626 07:17:00.449168 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.449748 kubelet[2743]: I0626 07:17:00.449244 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88f435673380c1a59f81f64f4f03a047-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-2-1603354b52\" (UID: \"88f435673380c1a59f81f64f4f03a047\") " pod="kube-system/kube-apiserver-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.449748 kubelet[2743]: I0626 07:17:00.449285 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/88f435673380c1a59f81f64f4f03a047-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-2-1603354b52\" (UID: \"88f435673380c1a59f81f64f4f03a047\") " pod="kube-system/kube-apiserver-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.449748 kubelet[2743]: I0626 07:17:00.449319 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.449748 kubelet[2743]: I0626 07:17:00.449358 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.450233 kubelet[2743]: I0626 07:17:00.449414 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" Jun 26 07:17:00.450233 kubelet[2743]: I0626 07:17:00.449455 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dea80f6749dae5f7261b0cd5f0860463-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-2-1603354b52\" (UID: \"dea80f6749dae5f7261b0cd5f0860463\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" 
Jun 26 07:17:00.450233 kubelet[2743]: I0626 07:17:00.449497 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/988f38c0938d7b6fcfb0a76fb069a7ac-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-2-1603354b52\" (UID: \"988f38c0938d7b6fcfb0a76fb069a7ac\") " pod="kube-system/kube-scheduler-ci-4012.0.0-2-1603354b52"
Jun 26 07:17:00.450233 kubelet[2743]: I0626 07:17:00.449530 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88f435673380c1a59f81f64f4f03a047-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-2-1603354b52\" (UID: \"88f435673380c1a59f81f64f4f03a047\") " pod="kube-system/kube-apiserver-ci-4012.0.0-2-1603354b52"
Jun 26 07:17:00.682489 kubelet[2743]: E0626 07:17:00.682151 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:00.691720 kubelet[2743]: E0626 07:17:00.690384 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:00.692044 kubelet[2743]: E0626 07:17:00.692009 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:00.970406 kubelet[2743]: I0626 07:17:00.970005 2743 apiserver.go:52] "Watching apiserver"
Jun 26 07:17:01.046908 kubelet[2743]: I0626 07:17:01.046728 2743 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jun 26 07:17:01.213757 kubelet[2743]: E0626 07:17:01.213052 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:01.215047 kubelet[2743]: E0626 07:17:01.215016 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:01.216028 kubelet[2743]: E0626 07:17:01.216001 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:01.490609 kubelet[2743]: I0626 07:17:01.489953 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-2-1603354b52" podStartSLOduration=1.489884379 podCreationTimestamp="2024-06-26 07:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:01.446600832 +0000 UTC m=+1.705185589" watchObservedRunningTime="2024-06-26 07:17:01.489884379 +0000 UTC m=+1.748469138"
Jun 26 07:17:01.524801 kubelet[2743]: I0626 07:17:01.523225 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-2-1603354b52" podStartSLOduration=1.523163645 podCreationTimestamp="2024-06-26 07:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:01.499398981 +0000 UTC m=+1.757983738" watchObservedRunningTime="2024-06-26 07:17:01.523163645 +0000 UTC m=+1.781748392"
Jun 26 07:17:02.228877 kubelet[2743]: E0626 07:17:02.228816 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:02.248070 kubelet[2743]: E0626 07:17:02.245222 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:03.252333 kubelet[2743]: E0626 07:17:03.250192 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:04.080717 kubelet[2743]: E0626 07:17:04.077981 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:04.273710 kubelet[2743]: E0626 07:17:04.272655 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:09.364341 sudo[1844]: pam_unix(sudo:session): session closed for user root
Jun 26 07:17:09.369367 sshd[1837]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:09.381302 systemd[1]: sshd@6-144.126.218.72:22-147.75.109.163:51494.service: Deactivated successfully.
Jun 26 07:17:09.386778 systemd-logind[1593]: Session 7 logged out. Waiting for processes to exit.
Jun 26 07:17:09.387117 systemd[1]: session-7.scope: Deactivated successfully.
Jun 26 07:17:09.395352 systemd-logind[1593]: Removed session 7.
Jun 26 07:17:10.136791 kubelet[2743]: E0626 07:17:10.132435 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:10.312382 kubelet[2743]: E0626 07:17:10.309016 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:11.325085 kubelet[2743]: E0626 07:17:11.325029 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:12.108736 kubelet[2743]: I0626 07:17:12.101367 2743 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 26 07:17:12.108736 kubelet[2743]: I0626 07:17:12.107400 2743 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 26 07:17:12.108992 containerd[1627]: time="2024-06-26T07:17:12.106589162Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 26 07:17:12.615660 kubelet[2743]: I0626 07:17:12.614993 2743 topology_manager.go:215] "Topology Admit Handler" podUID="51b36648-a307-45de-b2d5-696c6d36ce2f" podNamespace="kube-system" podName="kube-proxy-jmhdr"
Jun 26 07:17:12.709923 kubelet[2743]: I0626 07:17:12.709781 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51b36648-a307-45de-b2d5-696c6d36ce2f-lib-modules\") pod \"kube-proxy-jmhdr\" (UID: \"51b36648-a307-45de-b2d5-696c6d36ce2f\") " pod="kube-system/kube-proxy-jmhdr"
Jun 26 07:17:12.709923 kubelet[2743]: I0626 07:17:12.709866 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v6g7\" (UniqueName: \"kubernetes.io/projected/51b36648-a307-45de-b2d5-696c6d36ce2f-kube-api-access-2v6g7\") pod \"kube-proxy-jmhdr\" (UID: \"51b36648-a307-45de-b2d5-696c6d36ce2f\") " pod="kube-system/kube-proxy-jmhdr"
Jun 26 07:17:12.710213 kubelet[2743]: I0626 07:17:12.710008 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51b36648-a307-45de-b2d5-696c6d36ce2f-kube-proxy\") pod \"kube-proxy-jmhdr\" (UID: \"51b36648-a307-45de-b2d5-696c6d36ce2f\") " pod="kube-system/kube-proxy-jmhdr"
Jun 26 07:17:12.710213 kubelet[2743]: I0626 07:17:12.710062 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51b36648-a307-45de-b2d5-696c6d36ce2f-xtables-lock\") pod \"kube-proxy-jmhdr\" (UID: \"51b36648-a307-45de-b2d5-696c6d36ce2f\") " pod="kube-system/kube-proxy-jmhdr"
Jun 26 07:17:12.862076 kubelet[2743]: E0626 07:17:12.861999 2743 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jun 26 07:17:12.862076 kubelet[2743]: E0626 07:17:12.862088 2743 projected.go:198] Error preparing data for projected volume kube-api-access-2v6g7 for pod kube-system/kube-proxy-jmhdr: configmap "kube-root-ca.crt" not found
Jun 26 07:17:12.870912 kubelet[2743]: E0626 07:17:12.866761 2743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51b36648-a307-45de-b2d5-696c6d36ce2f-kube-api-access-2v6g7 podName:51b36648-a307-45de-b2d5-696c6d36ce2f nodeName:}" failed. No retries permitted until 2024-06-26 07:17:13.362508899 +0000 UTC m=+13.621093656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2v6g7" (UniqueName: "kubernetes.io/projected/51b36648-a307-45de-b2d5-696c6d36ce2f-kube-api-access-2v6g7") pod "kube-proxy-jmhdr" (UID: "51b36648-a307-45de-b2d5-696c6d36ce2f") : configmap "kube-root-ca.crt" not found
Jun 26 07:17:13.160213 kubelet[2743]: I0626 07:17:13.160046 2743 topology_manager.go:215] "Topology Admit Handler" podUID="948de60e-80be-4f99-b3fd-1bbafaa4313c" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-fv8h2"
Jun 26 07:17:13.319859 kubelet[2743]: I0626 07:17:13.319443 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqv4\" (UniqueName: \"kubernetes.io/projected/948de60e-80be-4f99-b3fd-1bbafaa4313c-kube-api-access-dxqv4\") pod \"tigera-operator-76c4974c85-fv8h2\" (UID: \"948de60e-80be-4f99-b3fd-1bbafaa4313c\") " pod="tigera-operator/tigera-operator-76c4974c85-fv8h2"
Jun 26 07:17:13.319859 kubelet[2743]: I0626 07:17:13.319549 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/948de60e-80be-4f99-b3fd-1bbafaa4313c-var-lib-calico\") pod \"tigera-operator-76c4974c85-fv8h2\" (UID: \"948de60e-80be-4f99-b3fd-1bbafaa4313c\") " pod="tigera-operator/tigera-operator-76c4974c85-fv8h2"
Jun 26 07:17:13.471129 containerd[1627]: time="2024-06-26T07:17:13.470719535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-fv8h2,Uid:948de60e-80be-4f99-b3fd-1bbafaa4313c,Namespace:tigera-operator,Attempt:0,}"
Jun 26 07:17:13.557317 kubelet[2743]: E0626 07:17:13.556309 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:13.566973 containerd[1627]: time="2024-06-26T07:17:13.566902286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmhdr,Uid:51b36648-a307-45de-b2d5-696c6d36ce2f,Namespace:kube-system,Attempt:0,}"
Jun 26 07:17:13.580154 containerd[1627]: time="2024-06-26T07:17:13.579203122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:17:13.580154 containerd[1627]: time="2024-06-26T07:17:13.579398035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:17:13.580154 containerd[1627]: time="2024-06-26T07:17:13.579437620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:17:13.580154 containerd[1627]: time="2024-06-26T07:17:13.579462554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:17:13.720354 containerd[1627]: time="2024-06-26T07:17:13.719842913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:17:13.720354 containerd[1627]: time="2024-06-26T07:17:13.719954287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:17:13.720354 containerd[1627]: time="2024-06-26T07:17:13.719990846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:17:13.720354 containerd[1627]: time="2024-06-26T07:17:13.720017004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:17:13.790334 containerd[1627]: time="2024-06-26T07:17:13.788648552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-fv8h2,Uid:948de60e-80be-4f99-b3fd-1bbafaa4313c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"465eaa62ebe8b8f0c978c2fe08b30ba7ee072c5fbdc1e64cc9dce2edb179f236\""
Jun 26 07:17:13.823078 containerd[1627]: time="2024-06-26T07:17:13.822363571Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jun 26 07:17:13.841947 containerd[1627]: time="2024-06-26T07:17:13.841895877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmhdr,Uid:51b36648-a307-45de-b2d5-696c6d36ce2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d18a07b4ffabc112def4be1ab10910633383266b333bef5d57cc7f7d526236\""
Jun 26 07:17:13.843455 kubelet[2743]: E0626 07:17:13.843320 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:13.854586 containerd[1627]: time="2024-06-26T07:17:13.854398347Z" level=info msg="CreateContainer within sandbox \"51d18a07b4ffabc112def4be1ab10910633383266b333bef5d57cc7f7d526236\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 26 07:17:13.900000 containerd[1627]: time="2024-06-26T07:17:13.899664097Z" level=info msg="CreateContainer within sandbox \"51d18a07b4ffabc112def4be1ab10910633383266b333bef5d57cc7f7d526236\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ea79d0d0b371d19c98618054857f9a17a21fcfed8a6a897e52768c0912f061d\""
Jun 26 07:17:13.902737 containerd[1627]: time="2024-06-26T07:17:13.901668184Z" level=info msg="StartContainer for \"1ea79d0d0b371d19c98618054857f9a17a21fcfed8a6a897e52768c0912f061d\""
Jun 26 07:17:14.061540 containerd[1627]: time="2024-06-26T07:17:14.060481146Z" level=info msg="StartContainer for \"1ea79d0d0b371d19c98618054857f9a17a21fcfed8a6a897e52768c0912f061d\" returns successfully"
Jun 26 07:17:14.369774 kubelet[2743]: E0626 07:17:14.367818 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:14.396265 kubelet[2743]: I0626 07:17:14.396156 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jmhdr" podStartSLOduration=2.39609327 podCreationTimestamp="2024-06-26 07:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:14.394753535 +0000 UTC m=+14.653338362" watchObservedRunningTime="2024-06-26 07:17:14.39609327 +0000 UTC m=+14.654678027"
Jun 26 07:17:15.797873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627428050.mount: Deactivated successfully.
Jun 26 07:17:17.094137 containerd[1627]: time="2024-06-26T07:17:17.094019293Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:17.111578 containerd[1627]: time="2024-06-26T07:17:17.095997499Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076080"
Jun 26 07:17:17.127972 containerd[1627]: time="2024-06-26T07:17:17.127893691Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:17.151627 containerd[1627]: time="2024-06-26T07:17:17.151278937Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:17.154045 containerd[1627]: time="2024-06-26T07:17:17.153978801Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.331549373s"
Jun 26 07:17:17.154291 containerd[1627]: time="2024-06-26T07:17:17.154268291Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jun 26 07:17:17.187304 containerd[1627]: time="2024-06-26T07:17:17.187243270Z" level=info msg="CreateContainer within sandbox \"465eaa62ebe8b8f0c978c2fe08b30ba7ee072c5fbdc1e64cc9dce2edb179f236\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jun 26 07:17:17.251264 containerd[1627]: time="2024-06-26T07:17:17.251173569Z" level=info msg="CreateContainer within sandbox \"465eaa62ebe8b8f0c978c2fe08b30ba7ee072c5fbdc1e64cc9dce2edb179f236\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b2f2ef89c0203cb89417e4f84214f14301eb07b9b23f8efbd0ac4573efb0df44\""
Jun 26 07:17:17.253382 containerd[1627]: time="2024-06-26T07:17:17.253290618Z" level=info msg="StartContainer for \"b2f2ef89c0203cb89417e4f84214f14301eb07b9b23f8efbd0ac4573efb0df44\""
Jun 26 07:17:17.331375 systemd[1]: run-containerd-runc-k8s.io-b2f2ef89c0203cb89417e4f84214f14301eb07b9b23f8efbd0ac4573efb0df44-runc.9p7Boh.mount: Deactivated successfully.
Jun 26 07:17:17.421576 containerd[1627]: time="2024-06-26T07:17:17.421361380Z" level=info msg="StartContainer for \"b2f2ef89c0203cb89417e4f84214f14301eb07b9b23f8efbd0ac4573efb0df44\" returns successfully"
Jun 26 07:17:18.466595 kubelet[2743]: I0626 07:17:18.461329 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-fv8h2" podStartSLOduration=2.119334299 podCreationTimestamp="2024-06-26 07:17:13 +0000 UTC" firstStartedPulling="2024-06-26 07:17:13.819049293 +0000 UTC m=+14.077634028" lastFinishedPulling="2024-06-26 07:17:17.160981606 +0000 UTC m=+17.419566355" observedRunningTime="2024-06-26 07:17:18.453088329 +0000 UTC m=+18.711673086" watchObservedRunningTime="2024-06-26 07:17:18.461266626 +0000 UTC m=+18.719851384"
Jun 26 07:17:21.405028 kubelet[2743]: I0626 07:17:21.404852 2743 topology_manager.go:215] "Topology Admit Handler" podUID="d76258e0-2661-4390-afdd-883b5bcf4a7c" podNamespace="calico-system" podName="calico-typha-57b54455b8-txcfv"
Jun 26 07:17:21.472073 kubelet[2743]: I0626 07:17:21.471051 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d76258e0-2661-4390-afdd-883b5bcf4a7c-typha-certs\") pod \"calico-typha-57b54455b8-txcfv\" (UID: \"d76258e0-2661-4390-afdd-883b5bcf4a7c\") " pod="calico-system/calico-typha-57b54455b8-txcfv"
Jun 26 07:17:21.472073 kubelet[2743]: I0626 07:17:21.471122 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d76258e0-2661-4390-afdd-883b5bcf4a7c-tigera-ca-bundle\") pod \"calico-typha-57b54455b8-txcfv\" (UID: \"d76258e0-2661-4390-afdd-883b5bcf4a7c\") " pod="calico-system/calico-typha-57b54455b8-txcfv"
Jun 26 07:17:21.472073 kubelet[2743]: I0626 07:17:21.471170 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plks8\" (UniqueName: \"kubernetes.io/projected/d76258e0-2661-4390-afdd-883b5bcf4a7c-kube-api-access-plks8\") pod \"calico-typha-57b54455b8-txcfv\" (UID: \"d76258e0-2661-4390-afdd-883b5bcf4a7c\") " pod="calico-system/calico-typha-57b54455b8-txcfv"
Jun 26 07:17:21.619730 kubelet[2743]: I0626 07:17:21.616405 2743 topology_manager.go:215] "Topology Admit Handler" podUID="b8ecd2b1-8d38-4f28-8302-a60535a129cd" podNamespace="calico-system" podName="calico-node-wmmkr"
Jun 26 07:17:21.674810 kubelet[2743]: I0626 07:17:21.674528 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-cni-bin-dir\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.676771 kubelet[2743]: I0626 07:17:21.676650 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-flexvol-driver-host\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.678928 kubelet[2743]: I0626 07:17:21.677694 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8ecd2b1-8d38-4f28-8302-a60535a129cd-tigera-ca-bundle\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.678928 kubelet[2743]: I0626 07:17:21.677766 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wch6\" (UniqueName: \"kubernetes.io/projected/b8ecd2b1-8d38-4f28-8302-a60535a129cd-kube-api-access-2wch6\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.678928 kubelet[2743]: I0626 07:17:21.677810 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-cni-log-dir\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.678928 kubelet[2743]: I0626 07:17:21.677841 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b8ecd2b1-8d38-4f28-8302-a60535a129cd-node-certs\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.678928 kubelet[2743]: I0626 07:17:21.677878 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-policysync\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.681142 kubelet[2743]: I0626 07:17:21.677914 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-var-lib-calico\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.681142 kubelet[2743]: I0626 07:17:21.677951 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-var-run-calico\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.681142 kubelet[2743]: I0626 07:17:21.677993 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-lib-modules\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.681142 kubelet[2743]: I0626 07:17:21.678027 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-xtables-lock\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.681142 kubelet[2743]: I0626 07:17:21.678057 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b8ecd2b1-8d38-4f28-8302-a60535a129cd-cni-net-dir\") pod \"calico-node-wmmkr\" (UID: \"b8ecd2b1-8d38-4f28-8302-a60535a129cd\") " pod="calico-system/calico-node-wmmkr"
Jun 26 07:17:21.736075 kubelet[2743]: E0626 07:17:21.733297 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:21.738756 containerd[1627]: time="2024-06-26T07:17:21.738463612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57b54455b8-txcfv,Uid:d76258e0-2661-4390-afdd-883b5bcf4a7c,Namespace:calico-system,Attempt:0,}"
Jun 26 07:17:21.845150 kubelet[2743]: E0626 07:17:21.845109 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.846518 kubelet[2743]: W0626 07:17:21.845426 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.846518 kubelet[2743]: E0626 07:17:21.845469 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.846518 kubelet[2743]: I0626 07:17:21.846279 2743 topology_manager.go:215] "Topology Admit Handler" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503" podNamespace="calico-system" podName="csi-node-driver-csr69"
Jun 26 07:17:21.850057 kubelet[2743]: E0626 07:17:21.849432 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503"
Jun 26 07:17:21.875403 kubelet[2743]: E0626 07:17:21.869853 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.875403 kubelet[2743]: W0626 07:17:21.873356 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.875403 kubelet[2743]: E0626 07:17:21.874875 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.898735 kubelet[2743]: E0626 07:17:21.894187 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.898735 kubelet[2743]: W0626 07:17:21.894226 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.898735 kubelet[2743]: E0626 07:17:21.894278 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.899371 kubelet[2743]: E0626 07:17:21.899308 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.899541 kubelet[2743]: W0626 07:17:21.899514 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.901700 kubelet[2743]: E0626 07:17:21.900342 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.915303 kubelet[2743]: E0626 07:17:21.914072 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.915303 kubelet[2743]: W0626 07:17:21.914099 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.915303 kubelet[2743]: E0626 07:17:21.914137 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.918340 kubelet[2743]: E0626 07:17:21.917395 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.918340 kubelet[2743]: W0626 07:17:21.917839 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.918340 kubelet[2743]: E0626 07:17:21.918192 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.924226 kubelet[2743]: E0626 07:17:21.923033 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.924226 kubelet[2743]: W0626 07:17:21.923060 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.924226 kubelet[2743]: E0626 07:17:21.923100 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.927110 kubelet[2743]: E0626 07:17:21.925812 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.927110 kubelet[2743]: W0626 07:17:21.925846 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.927110 kubelet[2743]: E0626 07:17:21.925914 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.931581 kubelet[2743]: E0626 07:17:21.928764 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.931581 kubelet[2743]: W0626 07:17:21.929363 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.931581 kubelet[2743]: E0626 07:17:21.931407 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.933871 containerd[1627]: time="2024-06-26T07:17:21.932220008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:17:21.933871 containerd[1627]: time="2024-06-26T07:17:21.932353557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:17:21.933871 containerd[1627]: time="2024-06-26T07:17:21.932392387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:17:21.933871 containerd[1627]: time="2024-06-26T07:17:21.932418493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:17:21.936661 kubelet[2743]: E0626 07:17:21.936179 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.936661 kubelet[2743]: W0626 07:17:21.936215 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.939821 kubelet[2743]: E0626 07:17:21.937913 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 26 07:17:21.939821 kubelet[2743]: W0626 07:17:21.937958 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 26 07:17:21.939821 kubelet[2743]: E0626 07:17:21.938007 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 26 07:17:21.939821 kubelet[2743]: E0626 07:17:21.938192 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jun 26 07:17:21.943717 kubelet[2743]: E0626 07:17:21.940955 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.943717 kubelet[2743]: W0626 07:17:21.940991 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.943717 kubelet[2743]: E0626 07:17:21.941032 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:21.944636 kubelet[2743]: E0626 07:17:21.944177 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.944636 kubelet[2743]: W0626 07:17:21.944201 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.944636 kubelet[2743]: E0626 07:17:21.944289 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:21.947624 kubelet[2743]: E0626 07:17:21.946166 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.947624 kubelet[2743]: W0626 07:17:21.946199 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.947624 kubelet[2743]: E0626 07:17:21.946379 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:21.948919 kubelet[2743]: E0626 07:17:21.948665 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.950117 kubelet[2743]: W0626 07:17:21.949577 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.950117 kubelet[2743]: E0626 07:17:21.949622 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:21.955261 kubelet[2743]: E0626 07:17:21.952588 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:21.955622 kubelet[2743]: E0626 07:17:21.953813 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.955887 kubelet[2743]: W0626 07:17:21.955857 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.957131 kubelet[2743]: E0626 07:17:21.956648 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:21.957131 kubelet[2743]: I0626 07:17:21.956740 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/13062869-9328-4eb6-9b1e-f48ab6dc9503-varrun\") pod \"csi-node-driver-csr69\" (UID: \"13062869-9328-4eb6-9b1e-f48ab6dc9503\") " pod="calico-system/csi-node-driver-csr69" Jun 26 07:17:21.963517 containerd[1627]: time="2024-06-26T07:17:21.960080162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wmmkr,Uid:b8ecd2b1-8d38-4f28-8302-a60535a129cd,Namespace:calico-system,Attempt:0,}" Jun 26 07:17:21.964121 kubelet[2743]: E0626 07:17:21.962084 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.964121 kubelet[2743]: W0626 07:17:21.962117 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not 
found in $PATH, output: "" Jun 26 07:17:21.965748 kubelet[2743]: E0626 07:17:21.965297 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:21.965748 kubelet[2743]: I0626 07:17:21.965363 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13062869-9328-4eb6-9b1e-f48ab6dc9503-kubelet-dir\") pod \"csi-node-driver-csr69\" (UID: \"13062869-9328-4eb6-9b1e-f48ab6dc9503\") " pod="calico-system/csi-node-driver-csr69" Jun 26 07:17:21.966902 kubelet[2743]: E0626 07:17:21.966873 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.968134 kubelet[2743]: W0626 07:17:21.967023 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.968134 kubelet[2743]: E0626 07:17:21.967179 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:21.969359 kubelet[2743]: E0626 07:17:21.969332 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.969529 kubelet[2743]: W0626 07:17:21.969505 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.970357 kubelet[2743]: E0626 07:17:21.970335 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:21.973623 kubelet[2743]: E0626 07:17:21.973188 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.973623 kubelet[2743]: W0626 07:17:21.973414 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.978791 kubelet[2743]: E0626 07:17:21.976848 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:21.980332 kubelet[2743]: E0626 07:17:21.980102 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.981435 kubelet[2743]: W0626 07:17:21.981294 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.987285 kubelet[2743]: E0626 07:17:21.984124 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:21.987285 kubelet[2743]: I0626 07:17:21.984187 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/13062869-9328-4eb6-9b1e-f48ab6dc9503-socket-dir\") pod \"csi-node-driver-csr69\" (UID: \"13062869-9328-4eb6-9b1e-f48ab6dc9503\") " pod="calico-system/csi-node-driver-csr69" Jun 26 07:17:21.988447 kubelet[2743]: E0626 07:17:21.988243 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.989982 kubelet[2743]: W0626 07:17:21.989829 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.993492 kubelet[2743]: E0626 07:17:21.993236 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:21.995022 kubelet[2743]: E0626 07:17:21.994973 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:21.995022 kubelet[2743]: W0626 07:17:21.995005 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:21.996830 kubelet[2743]: E0626 07:17:21.996707 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.001902 kubelet[2743]: E0626 07:17:22.001848 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.001902 kubelet[2743]: W0626 07:17:22.001882 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.001902 kubelet[2743]: E0626 07:17:22.001935 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.008139 kubelet[2743]: E0626 07:17:22.008047 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.008139 kubelet[2743]: W0626 07:17:22.008087 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.008139 kubelet[2743]: E0626 07:17:22.008140 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.013776 kubelet[2743]: E0626 07:17:22.013315 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.015897 kubelet[2743]: W0626 07:17:22.013787 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.015897 kubelet[2743]: E0626 07:17:22.015907 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.019417 kubelet[2743]: E0626 07:17:22.018160 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.019417 kubelet[2743]: W0626 07:17:22.018212 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.019417 kubelet[2743]: E0626 07:17:22.018260 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.019417 kubelet[2743]: E0626 07:17:22.018883 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.019417 kubelet[2743]: W0626 07:17:22.018902 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.019417 kubelet[2743]: E0626 07:17:22.018940 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.019417 kubelet[2743]: E0626 07:17:22.019320 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.019417 kubelet[2743]: W0626 07:17:22.019338 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.019417 kubelet[2743]: E0626 07:17:22.019398 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.022033 kubelet[2743]: E0626 07:17:22.020838 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.022033 kubelet[2743]: W0626 07:17:22.020865 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.022033 kubelet[2743]: E0626 07:17:22.020896 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.022033 kubelet[2743]: E0626 07:17:22.021643 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.022033 kubelet[2743]: W0626 07:17:22.021661 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.022033 kubelet[2743]: E0626 07:17:22.021713 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.022033 kubelet[2743]: E0626 07:17:22.022157 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.022033 kubelet[2743]: W0626 07:17:22.022174 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.022033 kubelet[2743]: E0626 07:17:22.022201 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.113396 containerd[1627]: time="2024-06-26T07:17:22.113145362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:22.113902 containerd[1627]: time="2024-06-26T07:17:22.113505052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:22.113902 containerd[1627]: time="2024-06-26T07:17:22.113733187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:22.115225 containerd[1627]: time="2024-06-26T07:17:22.113802197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:22.120040 kubelet[2743]: E0626 07:17:22.119817 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.120040 kubelet[2743]: W0626 07:17:22.119867 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.123708 kubelet[2743]: E0626 07:17:22.121272 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.125704 kubelet[2743]: E0626 07:17:22.125042 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.125704 kubelet[2743]: W0626 07:17:22.125065 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.125704 kubelet[2743]: E0626 07:17:22.125097 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.125704 kubelet[2743]: I0626 07:17:22.125132 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13062869-9328-4eb6-9b1e-f48ab6dc9503-registration-dir\") pod \"csi-node-driver-csr69\" (UID: \"13062869-9328-4eb6-9b1e-f48ab6dc9503\") " pod="calico-system/csi-node-driver-csr69" Jun 26 07:17:22.128218 kubelet[2743]: E0626 07:17:22.127540 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.128218 kubelet[2743]: W0626 07:17:22.127565 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.128218 kubelet[2743]: E0626 07:17:22.127670 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.128218 kubelet[2743]: I0626 07:17:22.127737 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdttn\" (UniqueName: \"kubernetes.io/projected/13062869-9328-4eb6-9b1e-f48ab6dc9503-kube-api-access-cdttn\") pod \"csi-node-driver-csr69\" (UID: \"13062869-9328-4eb6-9b1e-f48ab6dc9503\") " pod="calico-system/csi-node-driver-csr69" Jun 26 07:17:22.135982 kubelet[2743]: E0626 07:17:22.133954 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.135982 kubelet[2743]: W0626 07:17:22.133978 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.144723 kubelet[2743]: E0626 07:17:22.140827 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.144723 kubelet[2743]: W0626 07:17:22.140856 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.149836 kubelet[2743]: E0626 07:17:22.149783 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.154737 kubelet[2743]: E0626 07:17:22.152284 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.154737 kubelet[2743]: W0626 07:17:22.152318 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.155035 kubelet[2743]: E0626 07:17:22.154769 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.157402 kubelet[2743]: E0626 07:17:22.156253 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.159778 kubelet[2743]: E0626 07:17:22.158901 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.159778 kubelet[2743]: W0626 07:17:22.158929 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.164888 kubelet[2743]: E0626 07:17:22.164844 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.167194 kubelet[2743]: W0626 07:17:22.166758 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.171671 kubelet[2743]: E0626 07:17:22.167805 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Jun 26 07:17:22.171671 kubelet[2743]: W0626 07:17:22.167919 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.182847 kubelet[2743]: E0626 07:17:22.180639 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.183451 kubelet[2743]: W0626 07:17:22.183372 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.188115 kubelet[2743]: E0626 07:17:22.184105 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.188115 kubelet[2743]: E0626 07:17:22.184266 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.188115 kubelet[2743]: E0626 07:17:22.184390 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.194615 kubelet[2743]: E0626 07:17:22.194567 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.196723 kubelet[2743]: W0626 07:17:22.195936 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.196723 kubelet[2743]: E0626 07:17:22.196009 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.204050 kubelet[2743]: E0626 07:17:22.200456 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.204050 kubelet[2743]: E0626 07:17:22.203541 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.204050 kubelet[2743]: W0626 07:17:22.203560 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.204050 kubelet[2743]: E0626 07:17:22.203594 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:22.212750 kubelet[2743]: E0626 07:17:22.210809 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.212750 kubelet[2743]: W0626 07:17:22.210836 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.212750 kubelet[2743]: E0626 07:17:22.210880 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:22.219564 kubelet[2743]: E0626 07:17:22.219264 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:22.219564 kubelet[2743]: W0626 07:17:22.219313 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:22.219564 kubelet[2743]: E0626 07:17:22.219360 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Jun 26 07:17:22.223863 kubelet[2743]: E0626 07:17:22.223813 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.224326 kubelet[2743]: W0626 07:17:22.224057 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.224326 kubelet[2743]: E0626 07:17:22.224098 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.225795 kubelet[2743]: E0626 07:17:22.225769 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.229186 kubelet[2743]: W0626 07:17:22.226737 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.229186 kubelet[2743]: E0626 07:17:22.226802 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.233903 kubelet[2743]: E0626 07:17:22.233865 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.234217 kubelet[2743]: W0626 07:17:22.234106 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.234217 kubelet[2743]: E0626 07:17:22.234145 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.237951 kubelet[2743]: E0626 07:17:22.236997 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.239043 kubelet[2743]: W0626 07:17:22.238772 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.240860 kubelet[2743]: E0626 07:17:22.239186 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.246722 kubelet[2743]: E0626 07:17:22.246362 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.246722 kubelet[2743]: W0626 07:17:22.246392 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.246722 kubelet[2743]: E0626 07:17:22.246430 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.249282 kubelet[2743]: E0626 07:17:22.249043 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.249282 kubelet[2743]: W0626 07:17:22.249082 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.249282 kubelet[2743]: E0626 07:17:22.249121 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.253734 kubelet[2743]: E0626 07:17:22.252903 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.253734 kubelet[2743]: W0626 07:17:22.253008 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.253734 kubelet[2743]: E0626 07:17:22.253050 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.304612 containerd[1627]: time="2024-06-26T07:17:22.304554138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57b54455b8-txcfv,Uid:d76258e0-2661-4390-afdd-883b5bcf4a7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"574f7b437108dd530c09132691330d23dd7b4672ccab416fd8aca7df4114012c\"" 
Jun 26 07:17:22.311145 kubelet[2743]: E0626 07:17:22.310359 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" 
Jun 26 07:17:22.318734 containerd[1627]: time="2024-06-26T07:17:22.318278080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" 
Jun 26 07:17:22.347727 kubelet[2743]: E0626 07:17:22.347309 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.347727 kubelet[2743]: W0626 07:17:22.347343 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.347727 kubelet[2743]: E0626 07:17:22.347402 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.348026 kubelet[2743]: E0626 07:17:22.347980 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.348026 kubelet[2743]: W0626 07:17:22.347999 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.348125 kubelet[2743]: E0626 07:17:22.348039 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.349138 kubelet[2743]: E0626 07:17:22.348489 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.349138 kubelet[2743]: W0626 07:17:22.348517 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.349138 kubelet[2743]: E0626 07:17:22.348539 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.349138 kubelet[2743]: E0626 07:17:22.348886 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.349138 kubelet[2743]: W0626 07:17:22.348897 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.349138 kubelet[2743]: E0626 07:17:22.348922 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.349138 kubelet[2743]: E0626 07:17:22.349134 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.349138 kubelet[2743]: W0626 07:17:22.349142 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.351192 kubelet[2743]: E0626 07:17:22.349173 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.351192 kubelet[2743]: E0626 07:17:22.350288 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.351192 kubelet[2743]: W0626 07:17:22.350315 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.351192 kubelet[2743]: E0626 07:17:22.350392 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.353089 kubelet[2743]: E0626 07:17:22.352427 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.353089 kubelet[2743]: W0626 07:17:22.352456 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.353089 kubelet[2743]: E0626 07:17:22.352487 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.354913 kubelet[2743]: E0626 07:17:22.354342 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.354913 kubelet[2743]: W0626 07:17:22.354466 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.354913 kubelet[2743]: E0626 07:17:22.354502 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.357000 kubelet[2743]: E0626 07:17:22.356440 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.357000 kubelet[2743]: W0626 07:17:22.356467 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.357000 kubelet[2743]: E0626 07:17:22.356499 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.358426 kubelet[2743]: E0626 07:17:22.358053 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.358426 kubelet[2743]: W0626 07:17:22.358083 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.358426 kubelet[2743]: E0626 07:17:22.358115 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.375169 kubelet[2743]: E0626 07:17:22.374910 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:22.375169 kubelet[2743]: W0626 07:17:22.374934 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:22.375169 kubelet[2743]: E0626 07:17:22.374962 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:22.418373 containerd[1627]: time="2024-06-26T07:17:22.418147310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wmmkr,Uid:b8ecd2b1-8d38-4f28-8302-a60535a129cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"a578c470f961bf9a4a711cfe3ba7d43d3e698a92267600960cc404cbf0b709fb\"" 
Jun 26 07:17:22.425523 kubelet[2743]: E0626 07:17:22.425475 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" 
Jun 26 07:17:23.080931 kubelet[2743]: E0626 07:17:23.064986 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503" 
Jun 26 07:17:25.062455 kubelet[2743]: E0626 07:17:25.062406 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503" 
Jun 26 07:17:25.775957 containerd[1627]: time="2024-06-26T07:17:25.775833561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jun 26 07:17:25.783246 containerd[1627]: time="2024-06-26T07:17:25.781872822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" 
Jun 26 07:17:25.787670 containerd[1627]: time="2024-06-26T07:17:25.786956724Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jun 26 07:17:25.823932 containerd[1627]: time="2024-06-26T07:17:25.823866934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jun 26 07:17:25.828666 containerd[1627]: time="2024-06-26T07:17:25.827250533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.507723484s" 
Jun 26 07:17:25.828666 containerd[1627]: time="2024-06-26T07:17:25.827327097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" 
Jun 26 07:17:25.830712 containerd[1627]: time="2024-06-26T07:17:25.829961850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" 
Jun 26 07:17:25.868189 containerd[1627]: time="2024-06-26T07:17:25.867930848Z" level=info msg="CreateContainer within sandbox \"574f7b437108dd530c09132691330d23dd7b4672ccab416fd8aca7df4114012c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" 
Jun 26 07:17:25.917804 containerd[1627]: time="2024-06-26T07:17:25.912185965Z" level=info msg="CreateContainer within sandbox \"574f7b437108dd530c09132691330d23dd7b4672ccab416fd8aca7df4114012c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9b3042b82c11e8cf2de1cd64896346ece2fcf80ff8e50198de0050fec49788a8\"" 
Jun 26 07:17:25.917804 containerd[1627]: time="2024-06-26T07:17:25.913312554Z" level=info msg="StartContainer for \"9b3042b82c11e8cf2de1cd64896346ece2fcf80ff8e50198de0050fec49788a8\"" 
Jun 26 07:17:26.147061 containerd[1627]: time="2024-06-26T07:17:26.145578296Z" level=info msg="StartContainer for \"9b3042b82c11e8cf2de1cd64896346ece2fcf80ff8e50198de0050fec49788a8\" returns successfully" 
Jun 26 07:17:26.494414 kubelet[2743]: E0626 07:17:26.494190 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" 
Jun 26 07:17:26.525724 kubelet[2743]: E0626 07:17:26.525526 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.525724 kubelet[2743]: W0626 07:17:26.525555 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.525724 kubelet[2743]: E0626 07:17:26.525598 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.531283 kubelet[2743]: E0626 07:17:26.530792 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.531283 kubelet[2743]: W0626 07:17:26.530821 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.531283 kubelet[2743]: E0626 07:17:26.530863 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.532901 kubelet[2743]: E0626 07:17:26.532632 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.532901 kubelet[2743]: W0626 07:17:26.532662 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.532901 kubelet[2743]: E0626 07:17:26.532722 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.533480 kubelet[2743]: E0626 07:17:26.533387 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.533864 kubelet[2743]: W0626 07:17:26.533421 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.533864 kubelet[2743]: E0626 07:17:26.533630 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.536224 kubelet[2743]: E0626 07:17:26.535830 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.536224 kubelet[2743]: W0626 07:17:26.535856 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.536224 kubelet[2743]: E0626 07:17:26.535888 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.540781 kubelet[2743]: E0626 07:17:26.539160 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.541344 kubelet[2743]: W0626 07:17:26.541064 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.541344 kubelet[2743]: E0626 07:17:26.541139 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.542150 kubelet[2743]: E0626 07:17:26.541904 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.542150 kubelet[2743]: W0626 07:17:26.541925 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.542150 kubelet[2743]: E0626 07:17:26.541952 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.542966 kubelet[2743]: E0626 07:17:26.542411 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.542966 kubelet[2743]: W0626 07:17:26.542426 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.542966 kubelet[2743]: E0626 07:17:26.542450 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.543752 kubelet[2743]: E0626 07:17:26.543724 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.543958 kubelet[2743]: W0626 07:17:26.543932 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.544101 kubelet[2743]: E0626 07:17:26.544081 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.546295 kubelet[2743]: E0626 07:17:26.545137 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.546295 kubelet[2743]: W0626 07:17:26.545161 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.546295 kubelet[2743]: E0626 07:17:26.545195 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.549698 kubelet[2743]: E0626 07:17:26.549393 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.549698 kubelet[2743]: W0626 07:17:26.549455 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.549698 kubelet[2743]: E0626 07:17:26.549527 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.556391 kubelet[2743]: E0626 07:17:26.556134 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.556391 kubelet[2743]: W0626 07:17:26.556169 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.556391 kubelet[2743]: E0626 07:17:26.556211 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.557447 kubelet[2743]: E0626 07:17:26.557229 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.557447 kubelet[2743]: W0626 07:17:26.557256 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.557447 kubelet[2743]: E0626 07:17:26.557291 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.557984 kubelet[2743]: E0626 07:17:26.557652 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.557984 kubelet[2743]: W0626 07:17:26.557667 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.557984 kubelet[2743]: E0626 07:17:26.557714 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.560327 kubelet[2743]: E0626 07:17:26.559224 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.560327 kubelet[2743]: W0626 07:17:26.559248 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.560327 kubelet[2743]: E0626 07:17:26.559279 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.561322 kubelet[2743]: E0626 07:17:26.560998 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.561322 kubelet[2743]: W0626 07:17:26.561018 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.561322 kubelet[2743]: E0626 07:17:26.561074 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.562339 kubelet[2743]: E0626 07:17:26.561984 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.562339 kubelet[2743]: W0626 07:17:26.562003 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.562339 kubelet[2743]: E0626 07:17:26.562045 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.562928 kubelet[2743]: E0626 07:17:26.562694 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.562928 kubelet[2743]: W0626 07:17:26.562729 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.562928 kubelet[2743]: E0626 07:17:26.562762 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.563623 kubelet[2743]: E0626 07:17:26.563397 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.563623 kubelet[2743]: W0626 07:17:26.563429 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.563623 kubelet[2743]: E0626 07:17:26.563464 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.564433 kubelet[2743]: E0626 07:17:26.564182 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.564433 kubelet[2743]: W0626 07:17:26.564200 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.564433 kubelet[2743]: E0626 07:17:26.564228 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.565090 kubelet[2743]: E0626 07:17:26.564818 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.565090 kubelet[2743]: W0626 07:17:26.564835 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.565090 kubelet[2743]: E0626 07:17:26.564970 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.565635 kubelet[2743]: E0626 07:17:26.565533 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.565635 kubelet[2743]: W0626 07:17:26.565550 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.565954 kubelet[2743]: E0626 07:17:26.565847 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.567082 kubelet[2743]: E0626 07:17:26.567061 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.567475 kubelet[2743]: W0626 07:17:26.567225 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.567475 kubelet[2743]: E0626 07:17:26.567423 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.569128 kubelet[2743]: E0626 07:17:26.568941 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.569128 kubelet[2743]: W0626 07:17:26.568967 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.569749 kubelet[2743]: E0626 07:17:26.569394 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.570198 kubelet[2743]: E0626 07:17:26.570174 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.570453 kubelet[2743]: W0626 07:17:26.570422 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.571467 kubelet[2743]: E0626 07:17:26.571144 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.574519 kubelet[2743]: E0626 07:17:26.574468 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.577462 kubelet[2743]: W0626 07:17:26.574921 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.580347 kubelet[2743]: E0626 07:17:26.580292 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.580347 kubelet[2743]: W0626 07:17:26.580328 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.580783 kubelet[2743]: E0626 07:17:26.580758 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.580783 kubelet[2743]: W0626 07:17:26.580775 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.580944 kubelet[2743]: E0626 07:17:26.580799 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.581714 kubelet[2743]: E0626 07:17:26.581073 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.581714 kubelet[2743]: W0626 07:17:26.581087 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.581714 kubelet[2743]: E0626 07:17:26.581102 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.582146 kubelet[2743]: E0626 07:17:26.582120 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.582146 kubelet[2743]: W0626 07:17:26.582143 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.582309 kubelet[2743]: E0626 07:17:26.582165 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.582309 kubelet[2743]: E0626 07:17:26.582211 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.582717 kubelet[2743]: E0626 07:17:26.582422 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.582717 kubelet[2743]: W0626 07:17:26.582436 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.582717 kubelet[2743]: E0626 07:17:26.582450 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jun 26 07:17:26.582717 kubelet[2743]: E0626 07:17:26.582707 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jun 26 07:17:26.582717 kubelet[2743]: W0626 07:17:26.582718 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jun 26 07:17:26.582989 kubelet[2743]: E0626 07:17:26.582731 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:26.584745 kubelet[2743]: E0626 07:17:26.583156 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:26.584745 kubelet[2743]: W0626 07:17:26.583173 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:26.584745 kubelet[2743]: E0626 07:17:26.583188 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:26.584745 kubelet[2743]: E0626 07:17:26.583220 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.063045 kubelet[2743]: E0626 07:17:27.062835 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503" Jun 26 07:17:27.505918 kubelet[2743]: I0626 07:17:27.505855 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 26 07:17:27.512855 kubelet[2743]: E0626 07:17:27.511256 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:27.574092 kubelet[2743]: E0626 07:17:27.572581 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.574092 kubelet[2743]: W0626 07:17:27.572639 2743 driver-call.go:149] FlexVolume: 
driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.574092 kubelet[2743]: E0626 07:17:27.572676 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.574092 kubelet[2743]: E0626 07:17:27.573673 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.574092 kubelet[2743]: W0626 07:17:27.574020 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.574092 kubelet[2743]: E0626 07:17:27.574060 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.576301 kubelet[2743]: E0626 07:17:27.576254 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.576301 kubelet[2743]: W0626 07:17:27.576288 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.576301 kubelet[2743]: E0626 07:17:27.576323 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.578315 kubelet[2743]: E0626 07:17:27.578255 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.578572 kubelet[2743]: W0626 07:17:27.578300 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.579324 kubelet[2743]: E0626 07:17:27.578823 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.581663 kubelet[2743]: E0626 07:17:27.581614 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.581663 kubelet[2743]: W0626 07:17:27.581645 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.582503 kubelet[2743]: E0626 07:17:27.582352 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.583796 kubelet[2743]: E0626 07:17:27.583766 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.583796 kubelet[2743]: W0626 07:17:27.583789 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.584558 kubelet[2743]: E0626 07:17:27.583828 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.587286 kubelet[2743]: E0626 07:17:27.587088 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.587286 kubelet[2743]: W0626 07:17:27.587134 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.587286 kubelet[2743]: E0626 07:17:27.587172 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.589847 kubelet[2743]: E0626 07:17:27.589792 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.589847 kubelet[2743]: W0626 07:17:27.589821 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.589847 kubelet[2743]: E0626 07:17:27.589853 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.594121 kubelet[2743]: E0626 07:17:27.593805 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.594121 kubelet[2743]: W0626 07:17:27.593840 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.594121 kubelet[2743]: E0626 07:17:27.593879 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.596264 kubelet[2743]: E0626 07:17:27.595256 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.596264 kubelet[2743]: W0626 07:17:27.595288 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.596264 kubelet[2743]: E0626 07:17:27.595327 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.597046 kubelet[2743]: E0626 07:17:27.596728 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.597046 kubelet[2743]: W0626 07:17:27.596764 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.597416 kubelet[2743]: E0626 07:17:27.596808 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.597982 kubelet[2743]: E0626 07:17:27.597957 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.598144 kubelet[2743]: W0626 07:17:27.598120 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.598358 kubelet[2743]: E0626 07:17:27.598243 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.598829 kubelet[2743]: E0626 07:17:27.598807 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.599108 kubelet[2743]: W0626 07:17:27.598974 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.599108 kubelet[2743]: E0626 07:17:27.599013 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.599711 kubelet[2743]: E0626 07:17:27.599545 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.599711 kubelet[2743]: W0626 07:17:27.599566 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.599711 kubelet[2743]: E0626 07:17:27.599596 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.600354 kubelet[2743]: E0626 07:17:27.600212 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.600354 kubelet[2743]: W0626 07:17:27.600230 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.600354 kubelet[2743]: E0626 07:17:27.600256 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.684236 kubelet[2743]: E0626 07:17:27.683465 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.684236 kubelet[2743]: W0626 07:17:27.683504 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.684236 kubelet[2743]: E0626 07:17:27.683545 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.686976 kubelet[2743]: E0626 07:17:27.686918 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.687986 kubelet[2743]: W0626 07:17:27.687936 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.688134 kubelet[2743]: E0626 07:17:27.688004 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.689876 kubelet[2743]: E0626 07:17:27.689839 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.689876 kubelet[2743]: W0626 07:17:27.689868 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.690942 kubelet[2743]: E0626 07:17:27.690898 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.693267 kubelet[2743]: E0626 07:17:27.692758 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.693267 kubelet[2743]: W0626 07:17:27.692790 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.693267 kubelet[2743]: E0626 07:17:27.692915 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.694384 kubelet[2743]: E0626 07:17:27.694330 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.695302 kubelet[2743]: W0626 07:17:27.695263 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.695302 kubelet[2743]: E0626 07:17:27.695353 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.696232 kubelet[2743]: E0626 07:17:27.695931 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.696232 kubelet[2743]: W0626 07:17:27.696059 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.696232 kubelet[2743]: E0626 07:17:27.696198 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.697527 kubelet[2743]: E0626 07:17:27.697110 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.697527 kubelet[2743]: W0626 07:17:27.697225 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.697527 kubelet[2743]: E0626 07:17:27.697469 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.698485 kubelet[2743]: E0626 07:17:27.698370 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.698485 kubelet[2743]: W0626 07:17:27.698406 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.699087 kubelet[2743]: E0626 07:17:27.698718 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.699087 kubelet[2743]: E0626 07:17:27.698930 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.699087 kubelet[2743]: W0626 07:17:27.698945 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.699087 kubelet[2743]: E0626 07:17:27.698989 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.705255 kubelet[2743]: E0626 07:17:27.703245 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.705255 kubelet[2743]: W0626 07:17:27.703281 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.705872 kubelet[2743]: E0626 07:17:27.705835 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.708488 kubelet[2743]: E0626 07:17:27.708209 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.708488 kubelet[2743]: W0626 07:17:27.708246 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.710351 kubelet[2743]: E0626 07:17:27.709944 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.716056 kubelet[2743]: E0626 07:17:27.715559 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.716056 kubelet[2743]: W0626 07:17:27.715591 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.717462 kubelet[2743]: E0626 07:17:27.716167 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.717799 kubelet[2743]: E0626 07:17:27.717653 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.718055 kubelet[2743]: W0626 07:17:27.717985 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.718724 kubelet[2743]: E0626 07:17:27.718259 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.720576 kubelet[2743]: E0626 07:17:27.720494 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.720959 kubelet[2743]: W0626 07:17:27.720887 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.721254 kubelet[2743]: E0626 07:17:27.721240 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.721917 kubelet[2743]: E0626 07:17:27.721854 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.721917 kubelet[2743]: W0626 07:17:27.721871 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.722435 kubelet[2743]: E0626 07:17:27.722397 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.724971 kubelet[2743]: E0626 07:17:27.724935 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.725484 kubelet[2743]: W0626 07:17:27.725177 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.725484 kubelet[2743]: E0626 07:17:27.725259 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:17:27.726018 kubelet[2743]: E0626 07:17:27.725971 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.726018 kubelet[2743]: W0626 07:17:27.725988 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.726343 kubelet[2743]: E0626 07:17:27.726222 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:17:27.726839 kubelet[2743]: E0626 07:17:27.726755 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:17:27.726839 kubelet[2743]: W0626 07:17:27.726772 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:17:27.726839 kubelet[2743]: E0626 07:17:27.726805 2743 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 26 07:17:27.773854 containerd[1627]: time="2024-06-26T07:17:27.772554089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:27.782184 containerd[1627]: time="2024-06-26T07:17:27.781833029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Jun 26 07:17:27.786341 containerd[1627]: time="2024-06-26T07:17:27.786225892Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:27.794795 containerd[1627]: time="2024-06-26T07:17:27.794453069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:27.802202 containerd[1627]: time="2024-06-26T07:17:27.801527306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.971489474s"
Jun 26 07:17:27.802202 containerd[1627]: time="2024-06-26T07:17:27.802035922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
Jun 26 07:17:27.808637 containerd[1627]: time="2024-06-26T07:17:27.808388396Z" level=info msg="CreateContainer within sandbox \"a578c470f961bf9a4a711cfe3ba7d43d3e698a92267600960cc404cbf0b709fb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jun 26 07:17:27.864206 containerd[1627]: time="2024-06-26T07:17:27.864121210Z" level=info msg="CreateContainer within sandbox \"a578c470f961bf9a4a711cfe3ba7d43d3e698a92267600960cc404cbf0b709fb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aaf6a9347f80077300951320f06a1921b10b73a3c36d918600bd0c85436648cf\""
Jun 26 07:17:27.870044 containerd[1627]: time="2024-06-26T07:17:27.869935576Z" level=info msg="StartContainer for \"aaf6a9347f80077300951320f06a1921b10b73a3c36d918600bd0c85436648cf\""
Jun 26 07:17:27.877573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465872822.mount: Deactivated successfully.
Jun 26 07:17:28.065978 containerd[1627]: time="2024-06-26T07:17:28.064535190Z" level=info msg="StartContainer for \"aaf6a9347f80077300951320f06a1921b10b73a3c36d918600bd0c85436648cf\" returns successfully"
Jun 26 07:17:28.205757 containerd[1627]: time="2024-06-26T07:17:28.205624880Z" level=info msg="shim disconnected" id=aaf6a9347f80077300951320f06a1921b10b73a3c36d918600bd0c85436648cf namespace=k8s.io
Jun 26 07:17:28.205757 containerd[1627]: time="2024-06-26T07:17:28.205740563Z" level=warning msg="cleaning up after shim disconnected" id=aaf6a9347f80077300951320f06a1921b10b73a3c36d918600bd0c85436648cf namespace=k8s.io
Jun 26 07:17:28.205757 containerd[1627]: time="2024-06-26T07:17:28.205758510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:17:28.515281 kubelet[2743]: E0626 07:17:28.514739 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:28.516067 containerd[1627]: time="2024-06-26T07:17:28.515938637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Jun 26 07:17:28.552048 kubelet[2743]: I0626 07:17:28.551975 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-57b54455b8-txcfv" podStartSLOduration=4.0410039 podCreationTimestamp="2024-06-26 07:17:21 +0000 UTC" firstStartedPulling="2024-06-26 07:17:22.316839484 +0000 UTC m=+22.575424223" lastFinishedPulling="2024-06-26 07:17:25.827749924 +0000 UTC m=+26.086334674" observedRunningTime="2024-06-26 07:17:26.532667144 +0000 UTC m=+26.791251902" watchObservedRunningTime="2024-06-26 07:17:28.551914351 +0000 UTC m=+28.810499110"
Jun 26 07:17:28.851099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaf6a9347f80077300951320f06a1921b10b73a3c36d918600bd0c85436648cf-rootfs.mount: Deactivated successfully.
Jun 26 07:17:29.063872 kubelet[2743]: E0626 07:17:29.063791 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503"
Jun 26 07:17:31.068739 kubelet[2743]: E0626 07:17:31.062587 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503"
Jun 26 07:17:33.063641 kubelet[2743]: E0626 07:17:33.063586 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503"
Jun 26 07:17:34.268740 containerd[1627]: time="2024-06-26T07:17:34.268575894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:34.272256 containerd[1627]: time="2024-06-26T07:17:34.271735463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Jun 26 07:17:34.284325 containerd[1627]: time="2024-06-26T07:17:34.284146187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.768150027s"
Jun 26 07:17:34.284325 containerd[1627]: time="2024-06-26T07:17:34.284222198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Jun 26 07:17:34.284325 containerd[1627]: time="2024-06-26T07:17:34.284241104Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:34.287283 containerd[1627]: time="2024-06-26T07:17:34.286515190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:34.292640 containerd[1627]: time="2024-06-26T07:17:34.292566691Z" level=info msg="CreateContainer within sandbox \"a578c470f961bf9a4a711cfe3ba7d43d3e698a92267600960cc404cbf0b709fb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jun 26 07:17:34.364262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141245307.mount: Deactivated successfully.
Jun 26 07:17:34.375416 containerd[1627]: time="2024-06-26T07:17:34.375330999Z" level=info msg="CreateContainer within sandbox \"a578c470f961bf9a4a711cfe3ba7d43d3e698a92267600960cc404cbf0b709fb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"834badfbf3c8e9600afaf34cbb01c138bb3695caec96039ad7fcfc1b0ac01796\""
Jun 26 07:17:34.376340 containerd[1627]: time="2024-06-26T07:17:34.376032860Z" level=info msg="StartContainer for \"834badfbf3c8e9600afaf34cbb01c138bb3695caec96039ad7fcfc1b0ac01796\""
Jun 26 07:17:34.597878 containerd[1627]: time="2024-06-26T07:17:34.597724682Z" level=info msg="StartContainer for \"834badfbf3c8e9600afaf34cbb01c138bb3695caec96039ad7fcfc1b0ac01796\" returns successfully"
Jun 26 07:17:35.062801 kubelet[2743]: E0626 07:17:35.062731 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503"
Jun 26 07:17:35.489352 systemd-journald[1167]: Under memory pressure, flushing caches.
Jun 26 07:17:35.487098 systemd-resolved[1504]: Under memory pressure, flushing caches.
Jun 26 07:17:35.487193 systemd-resolved[1504]: Flushed all caches.
Jun 26 07:17:35.569306 kubelet[2743]: E0626 07:17:35.568747 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:35.786911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-834badfbf3c8e9600afaf34cbb01c138bb3695caec96039ad7fcfc1b0ac01796-rootfs.mount: Deactivated successfully.
Jun 26 07:17:35.823550 containerd[1627]: time="2024-06-26T07:17:35.823106754Z" level=info msg="shim disconnected" id=834badfbf3c8e9600afaf34cbb01c138bb3695caec96039ad7fcfc1b0ac01796 namespace=k8s.io
Jun 26 07:17:35.823550 containerd[1627]: time="2024-06-26T07:17:35.823214053Z" level=warning msg="cleaning up after shim disconnected" id=834badfbf3c8e9600afaf34cbb01c138bb3695caec96039ad7fcfc1b0ac01796 namespace=k8s.io
Jun 26 07:17:35.823550 containerd[1627]: time="2024-06-26T07:17:35.823228501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:17:35.826154 kubelet[2743]: I0626 07:17:35.826112 2743 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jun 26 07:17:35.904694 kubelet[2743]: I0626 07:17:35.902236 2743 topology_manager.go:215] "Topology Admit Handler" podUID="682fa9e0-fbd0-428c-a73b-d24968366d72" podNamespace="kube-system" podName="coredns-5dd5756b68-2ztk4"
Jun 26 07:17:35.921374 kubelet[2743]: I0626 07:17:35.916337 2743 topology_manager.go:215] "Topology Admit Handler" podUID="c348e5ac-b5ea-4ba1-8073-0aea0fe971e5" podNamespace="kube-system" podName="coredns-5dd5756b68-x5rld"
Jun 26 07:17:35.929371 kubelet[2743]: I0626 07:17:35.929294 2743 topology_manager.go:215] "Topology Admit Handler" podUID="488cb136-f123-48bb-a389-28fb8d6e1e83" podNamespace="calico-system" podName="calico-kube-controllers-7bfb745d89-z8kg8"
Jun 26 07:17:36.015453 kubelet[2743]: I0626 07:17:36.015385 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtntq\" (UniqueName: \"kubernetes.io/projected/682fa9e0-fbd0-428c-a73b-d24968366d72-kube-api-access-qtntq\") pod \"coredns-5dd5756b68-2ztk4\" (UID: \"682fa9e0-fbd0-428c-a73b-d24968366d72\") " pod="kube-system/coredns-5dd5756b68-2ztk4"
Jun 26 07:17:36.015884 kubelet[2743]: I0626 07:17:36.015852 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qccl2\" (UniqueName: \"kubernetes.io/projected/c348e5ac-b5ea-4ba1-8073-0aea0fe971e5-kube-api-access-qccl2\") pod \"coredns-5dd5756b68-x5rld\" (UID: \"c348e5ac-b5ea-4ba1-8073-0aea0fe971e5\") " pod="kube-system/coredns-5dd5756b68-x5rld"
Jun 26 07:17:36.016147 kubelet[2743]: I0626 07:17:36.016126 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/682fa9e0-fbd0-428c-a73b-d24968366d72-config-volume\") pod \"coredns-5dd5756b68-2ztk4\" (UID: \"682fa9e0-fbd0-428c-a73b-d24968366d72\") " pod="kube-system/coredns-5dd5756b68-2ztk4"
Jun 26 07:17:36.016345 kubelet[2743]: I0626 07:17:36.016323 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c348e5ac-b5ea-4ba1-8073-0aea0fe971e5-config-volume\") pod \"coredns-5dd5756b68-x5rld\" (UID: \"c348e5ac-b5ea-4ba1-8073-0aea0fe971e5\") " pod="kube-system/coredns-5dd5756b68-x5rld"
Jun 26 07:17:36.118839 kubelet[2743]: I0626 07:17:36.117912 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdw2d\" (UniqueName: \"kubernetes.io/projected/488cb136-f123-48bb-a389-28fb8d6e1e83-kube-api-access-gdw2d\") pod \"calico-kube-controllers-7bfb745d89-z8kg8\" (UID: \"488cb136-f123-48bb-a389-28fb8d6e1e83\") " pod="calico-system/calico-kube-controllers-7bfb745d89-z8kg8"
Jun 26 07:17:36.118839 kubelet[2743]: I0626 07:17:36.118087 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/488cb136-f123-48bb-a389-28fb8d6e1e83-tigera-ca-bundle\") pod \"calico-kube-controllers-7bfb745d89-z8kg8\" (UID: \"488cb136-f123-48bb-a389-28fb8d6e1e83\") " pod="calico-system/calico-kube-controllers-7bfb745d89-z8kg8"
Jun 26 07:17:36.223531 kubelet[2743]: E0626 07:17:36.217006 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:36.223531 kubelet[2743]: E0626 07:17:36.220904 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:36.228175 containerd[1627]: time="2024-06-26T07:17:36.228119730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-x5rld,Uid:c348e5ac-b5ea-4ba1-8073-0aea0fe971e5,Namespace:kube-system,Attempt:0,}"
Jun 26 07:17:36.229173 containerd[1627]: time="2024-06-26T07:17:36.228154877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2ztk4,Uid:682fa9e0-fbd0-428c-a73b-d24968366d72,Namespace:kube-system,Attempt:0,}"
Jun 26 07:17:36.543028 containerd[1627]: time="2024-06-26T07:17:36.542964470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfb745d89-z8kg8,Uid:488cb136-f123-48bb-a389-28fb8d6e1e83,Namespace:calico-system,Attempt:0,}"
Jun 26 07:17:36.576239 kubelet[2743]: E0626 07:17:36.575293 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:36.582027 containerd[1627]: time="2024-06-26T07:17:36.581982082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jun 26 07:17:36.727431 containerd[1627]: time="2024-06-26T07:17:36.726463232Z" level=error msg="Failed to destroy network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.736226 containerd[1627]: time="2024-06-26T07:17:36.736074106Z" level=error msg="Failed to destroy network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.741404 containerd[1627]: time="2024-06-26T07:17:36.740949792Z" level=error msg="encountered an error cleaning up failed sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.763735 containerd[1627]: time="2024-06-26T07:17:36.759894182Z" level=error msg="encountered an error cleaning up failed sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.763735 containerd[1627]: time="2024-06-26T07:17:36.760035732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2ztk4,Uid:682fa9e0-fbd0-428c-a73b-d24968366d72,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.763986 kubelet[2743]: E0626 07:17:36.760422 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.763986 kubelet[2743]: E0626 07:17:36.760612 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-2ztk4"
Jun 26 07:17:36.763986 kubelet[2743]: E0626 07:17:36.760652 2743 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-2ztk4"
Jun 26 07:17:36.764182 kubelet[2743]: E0626 07:17:36.761232 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-2ztk4_kube-system(682fa9e0-fbd0-428c-a73b-d24968366d72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-2ztk4_kube-system(682fa9e0-fbd0-428c-a73b-d24968366d72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-2ztk4" podUID="682fa9e0-fbd0-428c-a73b-d24968366d72"
Jun 26 07:17:36.776476 containerd[1627]: time="2024-06-26T07:17:36.775122761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-x5rld,Uid:c348e5ac-b5ea-4ba1-8073-0aea0fe971e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.777270 kubelet[2743]: E0626 07:17:36.776137 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.777270 kubelet[2743]: E0626 07:17:36.776224 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-x5rld"
Jun 26 07:17:36.777270 kubelet[2743]: E0626 07:17:36.776260 2743 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-x5rld"
Jun 26 07:17:36.788788 kubelet[2743]: E0626 07:17:36.787859 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-x5rld_kube-system(c348e5ac-b5ea-4ba1-8073-0aea0fe971e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-x5rld_kube-system(c348e5ac-b5ea-4ba1-8073-0aea0fe971e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-x5rld" podUID="c348e5ac-b5ea-4ba1-8073-0aea0fe971e5"
Jun 26 07:17:36.798526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c-shm.mount: Deactivated successfully.
Jun 26 07:17:36.895729 containerd[1627]: time="2024-06-26T07:17:36.895526382Z" level=error msg="Failed to destroy network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.908169 containerd[1627]: time="2024-06-26T07:17:36.900173360Z" level=error msg="encountered an error cleaning up failed sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.908169 containerd[1627]: time="2024-06-26T07:17:36.900282578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfb745d89-z8kg8,Uid:488cb136-f123-48bb-a389-28fb8d6e1e83,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.906851 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5-shm.mount: Deactivated successfully.
Jun 26 07:17:36.908587 kubelet[2743]: E0626 07:17:36.904644 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:36.915365 kubelet[2743]: E0626 07:17:36.911599 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfb745d89-z8kg8"
Jun 26 07:17:36.915365 kubelet[2743]: E0626 07:17:36.911661 2743 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfb745d89-z8kg8"
Jun 26 07:17:36.916595 kubelet[2743]: E0626 07:17:36.915877 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bfb745d89-z8kg8_calico-system(488cb136-f123-48bb-a389-28fb8d6e1e83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bfb745d89-z8kg8_calico-system(488cb136-f123-48bb-a389-28fb8d6e1e83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bfb745d89-z8kg8" podUID="488cb136-f123-48bb-a389-28fb8d6e1e83"
Jun 26 07:17:37.069428 containerd[1627]: time="2024-06-26T07:17:37.068733260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-csr69,Uid:13062869-9328-4eb6-9b1e-f48ab6dc9503,Namespace:calico-system,Attempt:0,}"
Jun 26 07:17:37.265064 containerd[1627]: time="2024-06-26T07:17:37.264990162Z" level=error msg="Failed to destroy network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:37.267137 containerd[1627]: time="2024-06-26T07:17:37.267074492Z" level=error msg="encountered an error cleaning up failed sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:37.267734 containerd[1627]: time="2024-06-26T07:17:37.267366531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-csr69,Uid:13062869-9328-4eb6-9b1e-f48ab6dc9503,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:37.269757 kubelet[2743]: E0626 07:17:37.267890 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:37.269757 kubelet[2743]: E0626 07:17:37.268066 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-csr69"
Jun 26 07:17:37.269757 kubelet[2743]: E0626 07:17:37.268108 2743 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-csr69"
Jun 26 07:17:37.270425 kubelet[2743]: E0626 07:17:37.268190 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-csr69_calico-system(13062869-9328-4eb6-9b1e-f48ab6dc9503)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-csr69_calico-system(13062869-9328-4eb6-9b1e-f48ab6dc9503)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503"
Jun 26 07:17:37.273858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c-shm.mount: Deactivated successfully.
Jun 26 07:17:37.590615 kubelet[2743]: I0626 07:17:37.586269 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c"
Jun 26 07:17:37.596711 containerd[1627]: time="2024-06-26T07:17:37.594633076Z" level=info msg="StopPodSandbox for \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\""
Jun 26 07:17:37.599983 kubelet[2743]: I0626 07:17:37.599762 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329"
Jun 26 07:17:37.603730 containerd[1627]: time="2024-06-26T07:17:37.602584996Z" level=info msg="Ensure that sandbox c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c in task-service has been cleanup successfully"
Jun 26 07:17:37.615402 containerd[1627]: time="2024-06-26T07:17:37.614448548Z" level=info msg="StopPodSandbox for \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\""
Jun 26 07:17:37.615402 containerd[1627]: time="2024-06-26T07:17:37.615033174Z" level=info msg="Ensure that sandbox 3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329 in task-service has been cleanup successfully"
Jun 26 07:17:37.617958 kubelet[2743]: I0626 07:17:37.617903 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5"
Jun 26 07:17:37.648109 containerd[1627]: time="2024-06-26T07:17:37.648009756Z" level=info msg="StopPodSandbox for \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\""
Jun 26 07:17:37.649552 containerd[1627]: time="2024-06-26T07:17:37.649019509Z" level=info msg="Ensure that sandbox b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5 in task-service has been cleanup successfully"
Jun 26 07:17:37.660723 kubelet[2743]: I0626 07:17:37.658284 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c"
Jun 26 07:17:37.663478 containerd[1627]: time="2024-06-26T07:17:37.663412317Z" level=info msg="StopPodSandbox for \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\""
Jun 26 07:17:37.664885 containerd[1627]: time="2024-06-26T07:17:37.664834120Z" level=info msg="Ensure that sandbox d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c in task-service has been cleanup successfully"
Jun 26 07:17:37.826162 containerd[1627]: time="2024-06-26T07:17:37.826089144Z" level=error msg="StopPodSandbox for \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\" failed" error="failed to destroy network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:37.833208 kubelet[2743]: E0626 07:17:37.826927 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5"
Jun 26 07:17:37.833208 kubelet[2743]: E0626 07:17:37.829203 2743 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5"}
Jun 26 07:17:37.833208 kubelet[2743]: E0626 07:17:37.829386 2743 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"488cb136-f123-48bb-a389-28fb8d6e1e83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 26 07:17:37.833208 kubelet[2743]: E0626 07:17:37.830436 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"488cb136-f123-48bb-a389-28fb8d6e1e83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bfb745d89-z8kg8" podUID="488cb136-f123-48bb-a389-28fb8d6e1e83"
Jun 26 07:17:37.841403 containerd[1627]: time="2024-06-26T07:17:37.841173224Z" level=error msg="StopPodSandbox for \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\" failed" error="failed to destroy network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:37.844748 kubelet[2743]: E0626 07:17:37.843362 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c"
Jun 26 07:17:37.845150 kubelet[2743]: E0626 07:17:37.845114 2743 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c"}
Jun 26 07:17:37.845399 kubelet[2743]: E0626 07:17:37.845376 2743 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"13062869-9328-4eb6-9b1e-f48ab6dc9503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 26 07:17:37.845618 kubelet[2743]: E0626 07:17:37.845599 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"13062869-9328-4eb6-9b1e-f48ab6dc9503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-csr69" podUID="13062869-9328-4eb6-9b1e-f48ab6dc9503"
Jun 26 07:17:37.860592 containerd[1627]: time="2024-06-26T07:17:37.860508508Z" level=error msg="StopPodSandbox for \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\" failed" error="failed to destroy network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:37.863156 containerd[1627]: time="2024-06-26T07:17:37.862844216Z" level=error msg="StopPodSandbox for \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\" failed" error="failed to destroy network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 26 07:17:37.863717 kubelet[2743]: E0626 07:17:37.863203 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c"
Jun 26 07:17:37.863717 kubelet[2743]: E0626 07:17:37.863284 2743 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c"}
Jun 26 07:17:37.863717 kubelet[2743]: E0626 07:17:37.863445 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329"
Jun 26 07:17:37.864264 kubelet[2743]: E0626 07:17:37.863809 2743 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329"}
Jun 26 07:17:37.864264 kubelet[2743]: E0626 07:17:37.863347 2743 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"682fa9e0-fbd0-428c-a73b-d24968366d72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 26 07:17:37.864264 kubelet[2743]: E0626 07:17:37.864063 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"682fa9e0-fbd0-428c-a73b-d24968366d72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-2ztk4" podUID="682fa9e0-fbd0-428c-a73b-d24968366d72"
Jun 26 07:17:37.864924 kubelet[2743]: E0626 07:17:37.864097 2743 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c348e5ac-b5ea-4ba1-8073-0aea0fe971e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\\\": plugin
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 26 07:17:37.864924 kubelet[2743]: E0626 07:17:37.864584 2743 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c348e5ac-b5ea-4ba1-8073-0aea0fe971e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-x5rld" podUID="c348e5ac-b5ea-4ba1-8073-0aea0fe971e5" Jun 26 07:17:45.666181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536139814.mount: Deactivated successfully. Jun 26 07:17:45.733767 containerd[1627]: time="2024-06-26T07:17:45.733529762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:45.738272 containerd[1627]: time="2024-06-26T07:17:45.738184821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 26 07:17:45.744720 containerd[1627]: time="2024-06-26T07:17:45.744553769Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:45.751474 containerd[1627]: time="2024-06-26T07:17:45.751409742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:45.753626 containerd[1627]: time="2024-06-26T07:17:45.753102654Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 9.168381903s" Jun 26 07:17:45.753626 containerd[1627]: time="2024-06-26T07:17:45.753167145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 26 07:17:45.978183 containerd[1627]: time="2024-06-26T07:17:45.978089451Z" level=info msg="CreateContainer within sandbox \"a578c470f961bf9a4a711cfe3ba7d43d3e698a92267600960cc404cbf0b709fb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 26 07:17:46.035483 containerd[1627]: time="2024-06-26T07:17:46.035372424Z" level=info msg="CreateContainer within sandbox \"a578c470f961bf9a4a711cfe3ba7d43d3e698a92267600960cc404cbf0b709fb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"036c6154d537f379ac7ef983518e71bb8a26d7a9801b3d060f7f17ce3ce91fbb\"" Jun 26 07:17:46.036809 containerd[1627]: time="2024-06-26T07:17:46.036743093Z" level=info msg="StartContainer for \"036c6154d537f379ac7ef983518e71bb8a26d7a9801b3d060f7f17ce3ce91fbb\"" Jun 26 07:17:46.202726 containerd[1627]: time="2024-06-26T07:17:46.202178986Z" level=info msg="StartContainer for \"036c6154d537f379ac7ef983518e71bb8a26d7a9801b3d060f7f17ce3ce91fbb\" returns successfully" Jun 26 07:17:46.312188 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 26 07:17:46.312377 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 26 07:17:46.769399 kubelet[2743]: E0626 07:17:46.769340 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:46.822824 kubelet[2743]: I0626 07:17:46.820751 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-wmmkr" podStartSLOduration=2.495335878 podCreationTimestamp="2024-06-26 07:17:21 +0000 UTC" firstStartedPulling="2024-06-26 07:17:22.430106592 +0000 UTC m=+22.688691326" lastFinishedPulling="2024-06-26 07:17:45.754429818 +0000 UTC m=+46.013014564" observedRunningTime="2024-06-26 07:17:46.813470791 +0000 UTC m=+47.072055563" watchObservedRunningTime="2024-06-26 07:17:46.819659116 +0000 UTC m=+47.078243894"
Jun 26 07:17:47.522119 systemd-resolved[1504]: Under memory pressure, flushing caches.
Jun 26 07:17:47.525032 systemd-journald[1167]: Under memory pressure, flushing caches.
Jun 26 07:17:47.522162 systemd-resolved[1504]: Flushed all caches.
Jun 26 07:17:47.780030 kubelet[2743]: E0626 07:17:47.778883 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:48.781104 kubelet[2743]: E0626 07:17:48.780963 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:50.835620 kubelet[2743]: I0626 07:17:50.835526 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 26 07:17:50.838004 kubelet[2743]: E0626 07:17:50.837434 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:51.067149 containerd[1627]: time="2024-06-26T07:17:51.065900541Z" level=info msg="StopPodSandbox for \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\""
Jun 26 07:17:51.067149 containerd[1627]: time="2024-06-26T07:17:51.065940807Z" level=info msg="StopPodSandbox for \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\""
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.188 [INFO][4045] k8s.go 608: Cleaning up netns ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.192 [INFO][4045] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" iface="eth0" netns="/var/run/netns/cni-b6b11587-3a2f-f5ad-6f82-acf4e1709a3f"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.192 [INFO][4045] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" iface="eth0" netns="/var/run/netns/cni-b6b11587-3a2f-f5ad-6f82-acf4e1709a3f"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.193 [INFO][4045] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" iface="eth0" netns="/var/run/netns/cni-b6b11587-3a2f-f5ad-6f82-acf4e1709a3f"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.193 [INFO][4045] k8s.go 615: Releasing IP address(es) ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.193 [INFO][4045] utils.go 188: Calico CNI releasing IP address ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.536 [INFO][4061] ipam_plugin.go 411: Releasing address using handleID ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.537 [INFO][4061] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.538 [INFO][4061] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.561 [WARNING][4061] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.561 [INFO][4061] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.566 [INFO][4061] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 26 07:17:51.581671 containerd[1627]: 2024-06-26 07:17:51.570 [INFO][4045] k8s.go 621: Teardown processing complete. ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c"
Jun 26 07:17:51.592568 containerd[1627]: time="2024-06-26T07:17:51.585839413Z" level=info msg="TearDown network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\" successfully"
Jun 26 07:17:51.592568 containerd[1627]: time="2024-06-26T07:17:51.585899402Z" level=info msg="StopPodSandbox for \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\" returns successfully"
Jun 26 07:17:51.603887 systemd[1]: run-netns-cni\x2db6b11587\x2d3a2f\x2df5ad\x2d6f82\x2dacf4e1709a3f.mount: Deactivated successfully.
Jun 26 07:17:51.621613 containerd[1627]: time="2024-06-26T07:17:51.621545530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-csr69,Uid:13062869-9328-4eb6-9b1e-f48ab6dc9503,Namespace:calico-system,Attempt:1,}"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.202 [INFO][4053] k8s.go 608: Cleaning up netns ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.205 [INFO][4053] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" iface="eth0" netns="/var/run/netns/cni-b6be2088-99fd-2373-e23c-30128b5312ae"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.205 [INFO][4053] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" iface="eth0" netns="/var/run/netns/cni-b6be2088-99fd-2373-e23c-30128b5312ae"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.205 [INFO][4053] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" iface="eth0" netns="/var/run/netns/cni-b6be2088-99fd-2373-e23c-30128b5312ae"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.205 [INFO][4053] k8s.go 615: Releasing IP address(es) ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.205 [INFO][4053] utils.go 188: Calico CNI releasing IP address ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.536 [INFO][4062] ipam_plugin.go 411: Releasing address using handleID ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.538 [INFO][4062] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.568 [INFO][4062] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.590 [WARNING][4062] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.590 [INFO][4062] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0"
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.595 [INFO][4062] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 26 07:17:51.628161 containerd[1627]: 2024-06-26 07:17:51.611 [INFO][4053] k8s.go 621: Teardown processing complete. ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c"
Jun 26 07:17:51.634695 containerd[1627]: time="2024-06-26T07:17:51.628854906Z" level=info msg="TearDown network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\" successfully"
Jun 26 07:17:51.634695 containerd[1627]: time="2024-06-26T07:17:51.633585312Z" level=info msg="StopPodSandbox for \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\" returns successfully"
Jun 26 07:17:51.640517 kubelet[2743]: E0626 07:17:51.638382 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:51.639322 systemd[1]: run-netns-cni\x2db6be2088\x2d99fd\x2d2373\x2de23c\x2d30128b5312ae.mount: Deactivated successfully.
Jun 26 07:17:51.642294 containerd[1627]: time="2024-06-26T07:17:51.641780483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2ztk4,Uid:682fa9e0-fbd0-428c-a73b-d24968366d72,Namespace:kube-system,Attempt:1,}"
Jun 26 07:17:51.811222 kubelet[2743]: E0626 07:17:51.810071 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:17:52.068353 containerd[1627]: time="2024-06-26T07:17:52.067004043Z" level=info msg="StopPodSandbox for \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\""
Jun 26 07:17:52.075023 containerd[1627]: time="2024-06-26T07:17:52.071612729Z" level=info msg="StopPodSandbox for \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\""
Jun 26 07:17:52.344793 systemd-networkd[1248]: cali2cc9ada6af5: Link UP
Jun 26 07:17:52.346656 systemd-networkd[1248]: cali2cc9ada6af5: Gained carrier
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:51.838 [INFO][4092] utils.go 100: File /var/lib/calico/mtu does not exist
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:51.876 [INFO][4092] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0 csi-node-driver- calico-system 13062869-9328-4eb6-9b1e-f48ab6dc9503 785 0 2024-06-26 07:17:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.0.0-2-1603354b52 csi-node-driver-csr69 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali2cc9ada6af5 [] []}} ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Namespace="calico-system" Pod="csi-node-driver-csr69" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:51.876 [INFO][4092] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Namespace="calico-system" Pod="csi-node-driver-csr69" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.056 [INFO][4119] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" HandleID="k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.108 [INFO][4119] ipam_plugin.go 264: Auto assigning IP ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" HandleID="k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a07d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-2-1603354b52", "pod":"csi-node-driver-csr69", "timestamp":"2024-06-26 07:17:52.056519401 +0000 UTC"}, Hostname:"ci-4012.0.0-2-1603354b52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.108 [INFO][4119] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.108 [INFO][4119] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.108 [INFO][4119] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-2-1603354b52'
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.128 [INFO][4119] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.155 [INFO][4119] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.186 [INFO][4119] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.195 [INFO][4119] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.202 [INFO][4119] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.202 [INFO][4119] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.210 [INFO][4119] ipam.go 1685: Creating new handle: k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.242 [INFO][4119] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.273 [INFO][4119] ipam.go 1216: Successfully claimed IPs: [192.168.23.1/26] block=192.168.23.0/26 handle="k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.274 [INFO][4119] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.1/26] handle="k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.275 [INFO][4119] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 26 07:17:52.432380 containerd[1627]: 2024-06-26 07:17:52.275 [INFO][4119] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.23.1/26] IPv6=[] ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" HandleID="k8s-pod-network.4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:52.455801 containerd[1627]: 2024-06-26 07:17:52.283 [INFO][4092] k8s.go 386: Populated endpoint ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Namespace="calico-system" Pod="csi-node-driver-csr69" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13062869-9328-4eb6-9b1e-f48ab6dc9503", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"", Pod:"csi-node-driver-csr69", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2cc9ada6af5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 26 07:17:52.455801 containerd[1627]: 2024-06-26 07:17:52.283 [INFO][4092] k8s.go 387: Calico CNI using IPs: [192.168.23.1/32] ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Namespace="calico-system" Pod="csi-node-driver-csr69" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:52.455801 containerd[1627]: 2024-06-26 07:17:52.284 [INFO][4092] dataplane_linux.go 68: Setting the host side veth name to cali2cc9ada6af5 ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Namespace="calico-system" Pod="csi-node-driver-csr69" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:52.455801 containerd[1627]: 2024-06-26 07:17:52.344 [INFO][4092] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Namespace="calico-system" Pod="csi-node-driver-csr69" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:52.455801 containerd[1627]: 2024-06-26 07:17:52.352 [INFO][4092] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Namespace="calico-system" Pod="csi-node-driver-csr69" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13062869-9328-4eb6-9b1e-f48ab6dc9503", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db", Pod:"csi-node-driver-csr69", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2cc9ada6af5", MAC:"5a:74:10:4b:59:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 26 07:17:52.455801 containerd[1627]: 2024-06-26 07:17:52.384 [INFO][4092] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db" Namespace="calico-system" Pod="csi-node-driver-csr69" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0"
Jun 26 07:17:52.630852 systemd-networkd[1248]: calidcd2b1fb593: Link UP
Jun 26 07:17:52.636445 systemd-networkd[1248]: calidcd2b1fb593: Gained carrier
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:51.854 [INFO][4096] utils.go 100: File /var/lib/calico/mtu does not exist
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:51.878 [INFO][4096] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0 coredns-5dd5756b68- kube-system 682fa9e0-fbd0-428c-a73b-d24968366d72 786 0 2024-06-26 07:17:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-2-1603354b52 coredns-5dd5756b68-2ztk4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidcd2b1fb593 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Namespace="kube-system" Pod="coredns-5dd5756b68-2ztk4" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:51.879 [INFO][4096] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Namespace="kube-system" Pod="coredns-5dd5756b68-2ztk4" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.261 [INFO][4120] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" HandleID="k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.400 [INFO][4120] ipam_plugin.go 264: Auto assigning IP ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" HandleID="k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005fe640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-2-1603354b52", "pod":"coredns-5dd5756b68-2ztk4", "timestamp":"2024-06-26 07:17:52.261824989 +0000 UTC"}, Hostname:"ci-4012.0.0-2-1603354b52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.402 [INFO][4120] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.404 [INFO][4120] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.407 [INFO][4120] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-2-1603354b52'
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.420 [INFO][4120] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.479 [INFO][4120] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.504 [INFO][4120] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.514 [INFO][4120] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.526 [INFO][4120] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.527 [INFO][4120] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.534 [INFO][4120] ipam.go 1685: Creating new handle: k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.545 [INFO][4120] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.565 [INFO][4120] ipam.go 1216: Successfully claimed IPs: [192.168.23.2/26] block=192.168.23.0/26
handle="k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.565 [INFO][4120] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.2/26] handle="k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.566 [INFO][4120] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:17:52.751713 containerd[1627]: 2024-06-26 07:17:52.566 [INFO][4120] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.23.2/26] IPv6=[] ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" HandleID="k8s-pod-network.9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:17:52.753142 containerd[1627]: 2024-06-26 07:17:52.600 [INFO][4096] k8s.go 386: Populated endpoint ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Namespace="kube-system" Pod="coredns-5dd5756b68-2ztk4" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"682fa9e0-fbd0-428c-a73b-d24968366d72", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"", Pod:"coredns-5dd5756b68-2ztk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcd2b1fb593", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:17:52.753142 containerd[1627]: 2024-06-26 07:17:52.608 [INFO][4096] k8s.go 387: Calico CNI using IPs: [192.168.23.2/32] ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Namespace="kube-system" Pod="coredns-5dd5756b68-2ztk4" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:17:52.753142 containerd[1627]: 2024-06-26 07:17:52.608 [INFO][4096] dataplane_linux.go 68: Setting the host side veth name to calidcd2b1fb593 ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Namespace="kube-system" Pod="coredns-5dd5756b68-2ztk4" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:17:52.753142 containerd[1627]: 2024-06-26 07:17:52.649 [INFO][4096] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Namespace="kube-system" Pod="coredns-5dd5756b68-2ztk4" 
WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:17:52.753142 containerd[1627]: 2024-06-26 07:17:52.678 [INFO][4096] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Namespace="kube-system" Pod="coredns-5dd5756b68-2ztk4" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"682fa9e0-fbd0-428c-a73b-d24968366d72", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc", Pod:"coredns-5dd5756b68-2ztk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcd2b1fb593", MAC:"d6:06:ae:07:77:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:17:52.753142 containerd[1627]: 2024-06-26 07:17:52.715 [INFO][4096] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc" Namespace="kube-system" Pod="coredns-5dd5756b68-2ztk4" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:17:52.822771 containerd[1627]: time="2024-06-26T07:17:52.822344770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:52.822771 containerd[1627]: time="2024-06-26T07:17:52.822489727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:52.822771 containerd[1627]: time="2024-06-26T07:17:52.822559518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:52.822771 containerd[1627]: time="2024-06-26T07:17:52.822583411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:53.051969 containerd[1627]: time="2024-06-26T07:17:53.041244144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:53.051969 containerd[1627]: time="2024-06-26T07:17:53.047012128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:53.051969 containerd[1627]: time="2024-06-26T07:17:53.047058061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:53.051969 containerd[1627]: time="2024-06-26T07:17:53.047078264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.623 [INFO][4164] k8s.go 608: Cleaning up netns ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.623 [INFO][4164] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" iface="eth0" netns="/var/run/netns/cni-8bc465c6-b92a-52b3-d7d0-5f58d864ca6c" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.625 [INFO][4164] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" iface="eth0" netns="/var/run/netns/cni-8bc465c6-b92a-52b3-d7d0-5f58d864ca6c" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.636 [INFO][4164] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" iface="eth0" netns="/var/run/netns/cni-8bc465c6-b92a-52b3-d7d0-5f58d864ca6c" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.637 [INFO][4164] k8s.go 615: Releasing IP address(es) ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.637 [INFO][4164] utils.go 188: Calico CNI releasing IP address ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.922 [INFO][4209] ipam_plugin.go 411: Releasing address using handleID ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.922 [INFO][4209] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.922 [INFO][4209] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.987 [WARNING][4209] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:52.988 [INFO][4209] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:53.002 [INFO][4209] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:17:53.067718 containerd[1627]: 2024-06-26 07:17:53.043 [INFO][4164] k8s.go 621: Teardown processing complete. ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:17:53.075640 containerd[1627]: time="2024-06-26T07:17:53.070792238Z" level=info msg="TearDown network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\" successfully" Jun 26 07:17:53.075640 containerd[1627]: time="2024-06-26T07:17:53.074951393Z" level=info msg="StopPodSandbox for \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\" returns successfully" Jun 26 07:17:53.087799 containerd[1627]: time="2024-06-26T07:17:53.087235411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfb745d89-z8kg8,Uid:488cb136-f123-48bb-a389-28fb8d6e1e83,Namespace:calico-system,Attempt:1,}" Jun 26 07:17:53.088371 systemd[1]: run-netns-cni\x2d8bc465c6\x2db92a\x2d52b3\x2dd7d0\x2d5f58d864ca6c.mount: Deactivated successfully. 
Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:52.846 [INFO][4163] k8s.go 608: Cleaning up netns ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:52.847 [INFO][4163] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" iface="eth0" netns="/var/run/netns/cni-689ca036-00fe-11b1-249e-9cd69cddc0f1" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:52.847 [INFO][4163] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" iface="eth0" netns="/var/run/netns/cni-689ca036-00fe-11b1-249e-9cd69cddc0f1" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:52.848 [INFO][4163] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" iface="eth0" netns="/var/run/netns/cni-689ca036-00fe-11b1-249e-9cd69cddc0f1" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:52.848 [INFO][4163] k8s.go 615: Releasing IP address(es) ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:52.848 [INFO][4163] utils.go 188: Calico CNI releasing IP address ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:53.197 [INFO][4238] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:53.204 [INFO][4238] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:53.204 [INFO][4238] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:53.263 [WARNING][4238] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:53.263 [INFO][4238] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:53.289 [INFO][4238] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:17:53.357265 containerd[1627]: 2024-06-26 07:17:53.337 [INFO][4163] k8s.go 621: Teardown processing complete. 
ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:17:53.363755 containerd[1627]: time="2024-06-26T07:17:53.363091520Z" level=info msg="TearDown network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\" successfully" Jun 26 07:17:53.363755 containerd[1627]: time="2024-06-26T07:17:53.363158670Z" level=info msg="StopPodSandbox for \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\" returns successfully" Jun 26 07:17:53.369088 kubelet[2743]: E0626 07:17:53.368191 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:53.370483 containerd[1627]: time="2024-06-26T07:17:53.370128474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-x5rld,Uid:c348e5ac-b5ea-4ba1-8073-0aea0fe971e5,Namespace:kube-system,Attempt:1,}" Jun 26 07:17:53.441108 containerd[1627]: time="2024-06-26T07:17:53.440512813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-csr69,Uid:13062869-9328-4eb6-9b1e-f48ab6dc9503,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db\"" Jun 26 07:17:53.463010 containerd[1627]: time="2024-06-26T07:17:53.462921496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 26 07:17:53.532998 containerd[1627]: time="2024-06-26T07:17:53.532838004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2ztk4,Uid:682fa9e0-fbd0-428c-a73b-d24968366d72,Namespace:kube-system,Attempt:1,} returns sandbox id \"9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc\"" Jun 26 07:17:53.539353 kubelet[2743]: E0626 07:17:53.539271 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:53.664122 systemd-networkd[1248]: cali2cc9ada6af5: Gained IPv6LL Jun 26 07:17:53.676092 containerd[1627]: time="2024-06-26T07:17:53.676021641Z" level=info msg="CreateContainer within sandbox \"9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 26 07:17:53.776206 containerd[1627]: time="2024-06-26T07:17:53.776048127Z" level=info msg="CreateContainer within sandbox \"9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b0d09918df18946f21a2e9f54992b492aa10b08aacdfcf69d98539206b35ca4\"" Jun 26 07:17:53.782780 containerd[1627]: time="2024-06-26T07:17:53.780807704Z" level=info msg="StartContainer for \"1b0d09918df18946f21a2e9f54992b492aa10b08aacdfcf69d98539206b35ca4\"" Jun 26 07:17:53.791836 systemd-networkd[1248]: calidcd2b1fb593: Gained IPv6LL Jun 26 07:17:53.884870 systemd-networkd[1248]: vxlan.calico: Link UP Jun 26 07:17:53.884883 systemd-networkd[1248]: vxlan.calico: Gained carrier Jun 26 07:17:53.890117 systemd[1]: run-netns-cni\x2d689ca036\x2d00fe\x2d11b1\x2d249e\x2d9cd69cddc0f1.mount: Deactivated successfully. Jun 26 07:17:54.106559 systemd[1]: run-containerd-runc-k8s.io-1b0d09918df18946f21a2e9f54992b492aa10b08aacdfcf69d98539206b35ca4-runc.Nyw1Aw.mount: Deactivated successfully. 
Jun 26 07:17:54.420454 containerd[1627]: time="2024-06-26T07:17:54.415789953Z" level=info msg="StartContainer for \"1b0d09918df18946f21a2e9f54992b492aa10b08aacdfcf69d98539206b35ca4\" returns successfully" Jun 26 07:17:54.464736 systemd-networkd[1248]: cali1f3ce07a97a: Link UP Jun 26 07:17:54.466902 systemd-networkd[1248]: cali1f3ce07a97a: Gained carrier Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:53.714 [INFO][4301] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0 calico-kube-controllers-7bfb745d89- calico-system 488cb136-f123-48bb-a389-28fb8d6e1e83 799 0 2024-06-26 07:17:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bfb745d89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.0.0-2-1603354b52 calico-kube-controllers-7bfb745d89-z8kg8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f3ce07a97a [] []}} ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Namespace="calico-system" Pod="calico-kube-controllers-7bfb745d89-z8kg8" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:53.715 [INFO][4301] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Namespace="calico-system" Pod="calico-kube-controllers-7bfb745d89-z8kg8" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.066 [INFO][4366] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" HandleID="k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.209 [INFO][4366] ipam_plugin.go 264: Auto assigning IP ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" HandleID="k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000386f30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-2-1603354b52", "pod":"calico-kube-controllers-7bfb745d89-z8kg8", "timestamp":"2024-06-26 07:17:54.066061806 +0000 UTC"}, Hostname:"ci-4012.0.0-2-1603354b52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.218 [INFO][4366] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.219 [INFO][4366] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.220 [INFO][4366] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-2-1603354b52' Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.237 [INFO][4366] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.278 [INFO][4366] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.315 [INFO][4366] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.350 [INFO][4366] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.399 [INFO][4366] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.399 [INFO][4366] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.404 [INFO][4366] ipam.go 1685: Creating new handle: k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925 Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.413 [INFO][4366] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.435 [INFO][4366] ipam.go 1216: Successfully claimed IPs: [192.168.23.3/26] block=192.168.23.0/26 
handle="k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.436 [INFO][4366] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.3/26] handle="k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.437 [INFO][4366] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:17:54.586783 containerd[1627]: 2024-06-26 07:17:54.437 [INFO][4366] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.23.3/26] IPv6=[] ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" HandleID="k8s-pod-network.80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:54.590439 containerd[1627]: 2024-06-26 07:17:54.443 [INFO][4301] k8s.go 386: Populated endpoint ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Namespace="calico-system" Pod="calico-kube-controllers-7bfb745d89-z8kg8" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0", GenerateName:"calico-kube-controllers-7bfb745d89-", Namespace:"calico-system", SelfLink:"", UID:"488cb136-f123-48bb-a389-28fb8d6e1e83", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfb745d89", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"", Pod:"calico-kube-controllers-7bfb745d89-z8kg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f3ce07a97a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:17:54.590439 containerd[1627]: 2024-06-26 07:17:54.443 [INFO][4301] k8s.go 387: Calico CNI using IPs: [192.168.23.3/32] ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Namespace="calico-system" Pod="calico-kube-controllers-7bfb745d89-z8kg8" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:54.590439 containerd[1627]: 2024-06-26 07:17:54.443 [INFO][4301] dataplane_linux.go 68: Setting the host side veth name to cali1f3ce07a97a ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Namespace="calico-system" Pod="calico-kube-controllers-7bfb745d89-z8kg8" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:54.590439 containerd[1627]: 2024-06-26 07:17:54.468 [INFO][4301] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Namespace="calico-system" Pod="calico-kube-controllers-7bfb745d89-z8kg8" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 
07:17:54.590439 containerd[1627]: 2024-06-26 07:17:54.474 [INFO][4301] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Namespace="calico-system" Pod="calico-kube-controllers-7bfb745d89-z8kg8" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0", GenerateName:"calico-kube-controllers-7bfb745d89-", Namespace:"calico-system", SelfLink:"", UID:"488cb136-f123-48bb-a389-28fb8d6e1e83", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfb745d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925", Pod:"calico-kube-controllers-7bfb745d89-z8kg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f3ce07a97a", MAC:"22:1d:04:dd:57:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 
07:17:54.590439 containerd[1627]: 2024-06-26 07:17:54.569 [INFO][4301] k8s.go 500: Wrote updated endpoint to datastore ContainerID="80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925" Namespace="calico-system" Pod="calico-kube-controllers-7bfb745d89-z8kg8" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:17:54.649771 systemd-networkd[1248]: cali8f3b735e19b: Link UP Jun 26 07:17:54.661863 systemd-networkd[1248]: cali8f3b735e19b: Gained carrier Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:53.803 [INFO][4336] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0 coredns-5dd5756b68- kube-system c348e5ac-b5ea-4ba1-8073-0aea0fe971e5 801 0 2024-06-26 07:17:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-2-1603354b52 coredns-5dd5756b68-x5rld eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8f3b735e19b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Namespace="kube-system" Pod="coredns-5dd5756b68-x5rld" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:53.803 [INFO][4336] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Namespace="kube-system" Pod="coredns-5dd5756b68-x5rld" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.316 [INFO][4379] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" HandleID="k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.378 [INFO][4379] ipam_plugin.go 264: Auto assigning IP ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" HandleID="k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003628d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-2-1603354b52", "pod":"coredns-5dd5756b68-x5rld", "timestamp":"2024-06-26 07:17:54.316857611 +0000 UTC"}, Hostname:"ci-4012.0.0-2-1603354b52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.379 [INFO][4379] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.437 [INFO][4379] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.439 [INFO][4379] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-2-1603354b52' Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.444 [INFO][4379] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.458 [INFO][4379] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.499 [INFO][4379] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.542 [INFO][4379] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.550 [INFO][4379] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.551 [INFO][4379] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.557 [INFO][4379] ipam.go 1685: Creating new handle: k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282 Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.572 [INFO][4379] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.598 [INFO][4379] ipam.go 1216: Successfully claimed IPs: [192.168.23.4/26] block=192.168.23.0/26 
handle="k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.598 [INFO][4379] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.4/26] handle="k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" host="ci-4012.0.0-2-1603354b52" Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.598 [INFO][4379] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:17:54.764787 containerd[1627]: 2024-06-26 07:17:54.599 [INFO][4379] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.23.4/26] IPv6=[] ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" HandleID="k8s-pod-network.0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:54.784763 containerd[1627]: 2024-06-26 07:17:54.611 [INFO][4336] k8s.go 386: Populated endpoint ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Namespace="kube-system" Pod="coredns-5dd5756b68-x5rld" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c348e5ac-b5ea-4ba1-8073-0aea0fe971e5", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"", Pod:"coredns-5dd5756b68-x5rld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f3b735e19b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:17:54.784763 containerd[1627]: 2024-06-26 07:17:54.613 [INFO][4336] k8s.go 387: Calico CNI using IPs: [192.168.23.4/32] ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Namespace="kube-system" Pod="coredns-5dd5756b68-x5rld" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:54.784763 containerd[1627]: 2024-06-26 07:17:54.614 [INFO][4336] dataplane_linux.go 68: Setting the host side veth name to cali8f3b735e19b ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Namespace="kube-system" Pod="coredns-5dd5756b68-x5rld" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:54.784763 containerd[1627]: 2024-06-26 07:17:54.659 [INFO][4336] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Namespace="kube-system" Pod="coredns-5dd5756b68-x5rld" 
WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:54.784763 containerd[1627]: 2024-06-26 07:17:54.663 [INFO][4336] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Namespace="kube-system" Pod="coredns-5dd5756b68-x5rld" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c348e5ac-b5ea-4ba1-8073-0aea0fe971e5", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282", Pod:"coredns-5dd5756b68-x5rld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f3b735e19b", MAC:"fa:f1:7f:70:da:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:17:54.784763 containerd[1627]: 2024-06-26 07:17:54.734 [INFO][4336] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282" Namespace="kube-system" Pod="coredns-5dd5756b68-x5rld" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:17:54.830351 containerd[1627]: time="2024-06-26T07:17:54.829410556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:54.830351 containerd[1627]: time="2024-06-26T07:17:54.829552264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:54.830351 containerd[1627]: time="2024-06-26T07:17:54.829638755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:54.830351 containerd[1627]: time="2024-06-26T07:17:54.829707228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:55.007028 systemd-networkd[1248]: vxlan.calico: Gained IPv6LL Jun 26 07:17:55.041609 containerd[1627]: time="2024-06-26T07:17:55.036812962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:55.041609 containerd[1627]: time="2024-06-26T07:17:55.036954702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:55.041609 containerd[1627]: time="2024-06-26T07:17:55.037019524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:55.041609 containerd[1627]: time="2024-06-26T07:17:55.037043475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:55.096902 kubelet[2743]: E0626 07:17:55.096246 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:55.165721 kubelet[2743]: I0626 07:17:55.163707 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2ztk4" podStartSLOduration=42.163606712 podCreationTimestamp="2024-06-26 07:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:55.162909408 +0000 UTC m=+55.421494164" watchObservedRunningTime="2024-06-26 07:17:55.163606712 +0000 UTC m=+55.422191468" Jun 26 07:17:55.305949 containerd[1627]: time="2024-06-26T07:17:55.304765812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfb745d89-z8kg8,Uid:488cb136-f123-48bb-a389-28fb8d6e1e83,Namespace:calico-system,Attempt:1,} returns sandbox id \"80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925\"" Jun 26 07:17:55.332120 containerd[1627]: time="2024-06-26T07:17:55.331177994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-x5rld,Uid:c348e5ac-b5ea-4ba1-8073-0aea0fe971e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282\"" Jun 26 07:17:55.333526 kubelet[2743]: E0626 07:17:55.332314 2743 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:55.359766 containerd[1627]: time="2024-06-26T07:17:55.359472170Z" level=info msg="CreateContainer within sandbox \"0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 26 07:17:55.420304 containerd[1627]: time="2024-06-26T07:17:55.420231412Z" level=info msg="CreateContainer within sandbox \"0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ad951df8e9d988dc84caa95a2ad47fbcae83404e2e427e23f63dfb139abd2f3\"" Jun 26 07:17:55.431747 containerd[1627]: time="2024-06-26T07:17:55.429754771Z" level=info msg="StartContainer for \"0ad951df8e9d988dc84caa95a2ad47fbcae83404e2e427e23f63dfb139abd2f3\"" Jun 26 07:17:55.583334 systemd-networkd[1248]: cali1f3ce07a97a: Gained IPv6LL Jun 26 07:17:55.655290 containerd[1627]: time="2024-06-26T07:17:55.654735852Z" level=info msg="StartContainer for \"0ad951df8e9d988dc84caa95a2ad47fbcae83404e2e427e23f63dfb139abd2f3\" returns successfully" Jun 26 07:17:55.838418 systemd-networkd[1248]: cali8f3b735e19b: Gained IPv6LL Jun 26 07:17:56.116661 kubelet[2743]: E0626 07:17:56.115659 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:56.138899 kubelet[2743]: E0626 07:17:56.136085 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:56.166220 containerd[1627]: time="2024-06-26T07:17:56.166144290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 26 07:17:56.174475 containerd[1627]: time="2024-06-26T07:17:56.174129166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 26 07:17:56.182408 kubelet[2743]: I0626 07:17:56.182331 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-x5rld" podStartSLOduration=43.182274934 podCreationTimestamp="2024-06-26 07:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:56.147288738 +0000 UTC m=+56.405873502" watchObservedRunningTime="2024-06-26 07:17:56.182274934 +0000 UTC m=+56.440859690" Jun 26 07:17:56.192758 containerd[1627]: time="2024-06-26T07:17:56.190226014Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:56.203721 containerd[1627]: time="2024-06-26T07:17:56.203314532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:56.208184 containerd[1627]: time="2024-06-26T07:17:56.207600218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.74458302s" Jun 26 07:17:56.209122 containerd[1627]: time="2024-06-26T07:17:56.208665703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 26 07:17:56.216573 containerd[1627]: 
time="2024-06-26T07:17:56.216095130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 26 07:17:56.235098 containerd[1627]: time="2024-06-26T07:17:56.234604168Z" level=info msg="CreateContainer within sandbox \"4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 26 07:17:56.346136 containerd[1627]: time="2024-06-26T07:17:56.346065420Z" level=info msg="CreateContainer within sandbox \"4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c09935a3c147f84ce6f01fbc9d17ba721d95223dd64f73b7a900294ba589c12b\"" Jun 26 07:17:56.349953 containerd[1627]: time="2024-06-26T07:17:56.348973769Z" level=info msg="StartContainer for \"c09935a3c147f84ce6f01fbc9d17ba721d95223dd64f73b7a900294ba589c12b\"" Jun 26 07:17:56.357394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484191983.mount: Deactivated successfully. 
Jun 26 07:17:56.647721 containerd[1627]: time="2024-06-26T07:17:56.646635523Z" level=info msg="StartContainer for \"c09935a3c147f84ce6f01fbc9d17ba721d95223dd64f73b7a900294ba589c12b\" returns successfully" Jun 26 07:17:57.161779 kubelet[2743]: E0626 07:17:57.160601 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:57.161779 kubelet[2743]: E0626 07:17:57.161514 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:58.190573 kubelet[2743]: E0626 07:17:58.190465 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:17:59.164476 systemd[1]: Started sshd@7-144.126.218.72:22-147.75.109.163:32868.service - OpenSSH per-connection server daemon (147.75.109.163:32868). Jun 26 07:17:59.510162 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:17:59.505857 systemd-resolved[1504]: Under memory pressure, flushing caches. Jun 26 07:17:59.505962 systemd-resolved[1504]: Flushed all caches. Jun 26 07:17:59.652468 sshd[4667]: Accepted publickey for core from 147.75.109.163 port 32868 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:17:59.702384 sshd[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:17:59.784409 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 26 07:17:59.789892 systemd-logind[1593]: New session 8 of user core. 
Jun 26 07:18:01.069993 containerd[1627]: time="2024-06-26T07:18:01.065191681Z" level=info msg="StopPodSandbox for \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\"" Jun 26 07:18:01.152148 sshd[4667]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:01.166217 systemd[1]: sshd@7-144.126.218.72:22-147.75.109.163:32868.service: Deactivated successfully. Jun 26 07:18:01.185783 systemd[1]: session-8.scope: Deactivated successfully. Jun 26 07:18:01.197798 systemd-logind[1593]: Session 8 logged out. Waiting for processes to exit. Jun 26 07:18:01.209458 systemd-logind[1593]: Removed session 8. Jun 26 07:18:01.541321 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:18:01.539380 systemd-resolved[1504]: Under memory pressure, flushing caches. Jun 26 07:18:01.539392 systemd-resolved[1504]: Flushed all caches. Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.553 [WARNING][4704] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0", GenerateName:"calico-kube-controllers-7bfb745d89-", Namespace:"calico-system", SelfLink:"", UID:"488cb136-f123-48bb-a389-28fb8d6e1e83", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfb745d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925", Pod:"calico-kube-controllers-7bfb745d89-z8kg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f3ce07a97a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.559 [INFO][4704] k8s.go 608: Cleaning up netns ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.559 [INFO][4704] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" iface="eth0" netns="" Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.559 [INFO][4704] k8s.go 615: Releasing IP address(es) ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.559 [INFO][4704] utils.go 188: Calico CNI releasing IP address ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.739 [INFO][4714] ipam_plugin.go 411: Releasing address using handleID ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.740 [INFO][4714] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.740 [INFO][4714] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.805 [WARNING][4714] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.805 [INFO][4714] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.858 [INFO][4714] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:18:01.957808 containerd[1627]: 2024-06-26 07:18:01.907 [INFO][4704] k8s.go 621: Teardown processing complete. ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:18:01.975057 containerd[1627]: time="2024-06-26T07:18:01.968559452Z" level=info msg="TearDown network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\" successfully" Jun 26 07:18:01.975057 containerd[1627]: time="2024-06-26T07:18:01.968619380Z" level=info msg="StopPodSandbox for \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\" returns successfully" Jun 26 07:18:01.975057 containerd[1627]: time="2024-06-26T07:18:01.970351250Z" level=info msg="RemovePodSandbox for \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\"" Jun 26 07:18:01.975057 containerd[1627]: time="2024-06-26T07:18:01.970443930Z" level=info msg="Forcibly stopping sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\"" Jun 26 07:18:02.481334 containerd[1627]: time="2024-06-26T07:18:02.477630804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 26 07:18:02.487716 containerd[1627]: time="2024-06-26T07:18:02.486039219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 26 07:18:02.537723 containerd[1627]: time="2024-06-26T07:18:02.533386487Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:02.576554 containerd[1627]: time="2024-06-26T07:18:02.574154051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:02.584929 containerd[1627]: time="2024-06-26T07:18:02.584704098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 6.36846378s" Jun 26 07:18:02.584929 containerd[1627]: time="2024-06-26T07:18:02.584804173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 26 07:18:02.592862 containerd[1627]: time="2024-06-26T07:18:02.592067172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 26 07:18:02.655528 containerd[1627]: time="2024-06-26T07:18:02.655283453Z" level=info msg="CreateContainer within sandbox \"80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 
07:18:02.489 [WARNING][4732] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0", GenerateName:"calico-kube-controllers-7bfb745d89-", Namespace:"calico-system", SelfLink:"", UID:"488cb136-f123-48bb-a389-28fb8d6e1e83", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfb745d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925", Pod:"calico-kube-controllers-7bfb745d89-z8kg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f3ce07a97a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.496 [INFO][4732] k8s.go 608: Cleaning up netns ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:18:02.731942 
containerd[1627]: 2024-06-26 07:18:02.498 [INFO][4732] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" iface="eth0" netns="" Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.499 [INFO][4732] k8s.go 615: Releasing IP address(es) ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.499 [INFO][4732] utils.go 188: Calico CNI releasing IP address ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.681 [INFO][4738] ipam_plugin.go 411: Releasing address using handleID ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.681 [INFO][4738] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.681 [INFO][4738] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.710 [WARNING][4738] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.710 [INFO][4738] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" HandleID="k8s-pod-network.b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--kube--controllers--7bfb745d89--z8kg8-eth0" Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.715 [INFO][4738] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:18:02.731942 containerd[1627]: 2024-06-26 07:18:02.722 [INFO][4732] k8s.go 621: Teardown processing complete. ContainerID="b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5" Jun 26 07:18:02.731942 containerd[1627]: time="2024-06-26T07:18:02.730546017Z" level=info msg="TearDown network for sandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\" successfully" Jun 26 07:18:02.786763 containerd[1627]: time="2024-06-26T07:18:02.786254691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:18:02.787208 containerd[1627]: time="2024-06-26T07:18:02.787149840Z" level=info msg="RemovePodSandbox \"b5ad93a53b8bb3ab696b457709ee79a5aaf128b47f66f6c214d8936c7aa0e4f5\" returns successfully" Jun 26 07:18:02.796937 containerd[1627]: time="2024-06-26T07:18:02.796003869Z" level=info msg="StopPodSandbox for \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\"" Jun 26 07:18:02.816655 containerd[1627]: time="2024-06-26T07:18:02.816333615Z" level=info msg="CreateContainer within sandbox \"80064a79734b545d54fdf2c4c6eeac83a8143522dd0af2987344e0fd7e64e925\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0c43e7c546787790f7c47d7f77e67706e7ba968ea61143c9b1887d6d5783a6d9\"" Jun 26 07:18:02.821854 containerd[1627]: time="2024-06-26T07:18:02.821653440Z" level=info msg="StartContainer for \"0c43e7c546787790f7c47d7f77e67706e7ba968ea61143c9b1887d6d5783a6d9\"" Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.241 [WARNING][4759] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c348e5ac-b5ea-4ba1-8073-0aea0fe971e5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282", Pod:"coredns-5dd5756b68-x5rld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f3b735e19b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.241 [INFO][4759] k8s.go 608: 
Cleaning up netns ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.241 [INFO][4759] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" iface="eth0" netns="" Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.241 [INFO][4759] k8s.go 615: Releasing IP address(es) ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.241 [INFO][4759] utils.go 188: Calico CNI releasing IP address ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.369 [INFO][4780] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.370 [INFO][4780] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.370 [INFO][4780] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.456 [WARNING][4780] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.457 [INFO][4780] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.482 [INFO][4780] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:18:03.514536 containerd[1627]: 2024-06-26 07:18:03.486 [INFO][4759] k8s.go 621: Teardown processing complete. ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:18:03.529270 containerd[1627]: time="2024-06-26T07:18:03.514419665Z" level=info msg="TearDown network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\" successfully" Jun 26 07:18:03.529270 containerd[1627]: time="2024-06-26T07:18:03.527486641Z" level=info msg="StopPodSandbox for \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\" returns successfully" Jun 26 07:18:03.546310 containerd[1627]: time="2024-06-26T07:18:03.535616016Z" level=info msg="RemovePodSandbox for \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\"" Jun 26 07:18:03.546310 containerd[1627]: time="2024-06-26T07:18:03.535734514Z" level=info msg="Forcibly stopping sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\"" Jun 26 07:18:03.627747 containerd[1627]: time="2024-06-26T07:18:03.626019229Z" level=info msg="StartContainer for \"0c43e7c546787790f7c47d7f77e67706e7ba968ea61143c9b1887d6d5783a6d9\" returns successfully" Jun 26 07:18:03.932719 containerd[1627]: 
2024-06-26 07:18:03.796 [WARNING][4814] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c348e5ac-b5ea-4ba1-8073-0aea0fe971e5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"0e4d26c45cec1e61d2c7a3d0e0e1f07d53a62e05949ae180dbe51c3d39579282", Pod:"coredns-5dd5756b68-x5rld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f3b735e19b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.796 [INFO][4814] k8s.go 608: Cleaning up netns ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.797 [INFO][4814] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" iface="eth0" netns="" Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.797 [INFO][4814] k8s.go 615: Releasing IP address(es) ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.797 [INFO][4814] utils.go 188: Calico CNI releasing IP address ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.870 [INFO][4821] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.870 [INFO][4821] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.870 [INFO][4821] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.890 [WARNING][4821] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.890 [INFO][4821] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" HandleID="k8s-pod-network.3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--x5rld-eth0" Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.902 [INFO][4821] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:18:03.932719 containerd[1627]: 2024-06-26 07:18:03.914 [INFO][4814] k8s.go 621: Teardown processing complete. ContainerID="3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329" Jun 26 07:18:03.932719 containerd[1627]: time="2024-06-26T07:18:03.923942255Z" level=info msg="TearDown network for sandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\" successfully" Jun 26 07:18:03.947938 containerd[1627]: time="2024-06-26T07:18:03.947833120Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:18:03.948161 containerd[1627]: time="2024-06-26T07:18:03.947961720Z" level=info msg="RemovePodSandbox \"3d9d6db6636bf77476cbc23d898ac7cb0678031819b216dce55afd5f7b643329\" returns successfully" Jun 26 07:18:03.950577 containerd[1627]: time="2024-06-26T07:18:03.950365249Z" level=info msg="StopPodSandbox for \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\"" Jun 26 07:18:04.210914 kubelet[2743]: I0626 07:18:04.207287 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bfb745d89-z8kg8" podStartSLOduration=35.93151314 podCreationTimestamp="2024-06-26 07:17:21 +0000 UTC" firstStartedPulling="2024-06-26 07:17:55.311616303 +0000 UTC m=+55.570201051" lastFinishedPulling="2024-06-26 07:18:02.587313906 +0000 UTC m=+62.845898658" observedRunningTime="2024-06-26 07:18:04.207190789 +0000 UTC m=+64.465775550" watchObservedRunningTime="2024-06-26 07:18:04.207210747 +0000 UTC m=+64.465795512" Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.157 [WARNING][4839] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13062869-9328-4eb6-9b1e-f48ab6dc9503", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db", Pod:"csi-node-driver-csr69", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2cc9ada6af5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.157 [INFO][4839] k8s.go 608: Cleaning up netns ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.162 [INFO][4839] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" iface="eth0" netns="" Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.162 [INFO][4839] k8s.go 615: Releasing IP address(es) ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.162 [INFO][4839] utils.go 188: Calico CNI releasing IP address ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.430 [INFO][4846] ipam_plugin.go 411: Releasing address using handleID ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.432 [INFO][4846] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.432 [INFO][4846] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.455 [WARNING][4846] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.455 [INFO][4846] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.468 [INFO][4846] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:18:04.477985 containerd[1627]: 2024-06-26 07:18:04.472 [INFO][4839] k8s.go 621: Teardown processing complete. ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Jun 26 07:18:04.481812 containerd[1627]: time="2024-06-26T07:18:04.478041858Z" level=info msg="TearDown network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\" successfully" Jun 26 07:18:04.481812 containerd[1627]: time="2024-06-26T07:18:04.478083764Z" level=info msg="StopPodSandbox for \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\" returns successfully" Jun 26 07:18:04.484429 containerd[1627]: time="2024-06-26T07:18:04.484368616Z" level=info msg="RemovePodSandbox for \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\"" Jun 26 07:18:04.493326 containerd[1627]: time="2024-06-26T07:18:04.484724931Z" level=info msg="Forcibly stopping sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\"" Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:04.895 [WARNING][4890] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13062869-9328-4eb6-9b1e-f48ab6dc9503", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db", Pod:"csi-node-driver-csr69", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2cc9ada6af5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:04.896 [INFO][4890] k8s.go 608: Cleaning up netns ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:04.896 [INFO][4890] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" iface="eth0" netns="" Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:04.896 [INFO][4890] k8s.go 615: Releasing IP address(es) ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:04.896 [INFO][4890] utils.go 188: Calico CNI releasing IP address ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:05.102 [INFO][4898] ipam_plugin.go 411: Releasing address using handleID ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:05.109 [INFO][4898] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:05.109 [INFO][4898] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:05.139 [WARNING][4898] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:05.140 [INFO][4898] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" HandleID="k8s-pod-network.d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Workload="ci--4012.0.0--2--1603354b52-k8s-csi--node--driver--csr69-eth0" Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:05.156 [INFO][4898] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:18:05.175226 containerd[1627]: 2024-06-26 07:18:05.165 [INFO][4890] k8s.go 621: Teardown processing complete. ContainerID="d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c" Jun 26 07:18:05.178244 containerd[1627]: time="2024-06-26T07:18:05.175274587Z" level=info msg="TearDown network for sandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\" successfully" Jun 26 07:18:05.219720 containerd[1627]: time="2024-06-26T07:18:05.218448440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:18:05.219720 containerd[1627]: time="2024-06-26T07:18:05.218561759Z" level=info msg="RemovePodSandbox \"d08f56fee5c4ff66d8c534e754c0484eedc2966e11d58b6c76541284dd9cd72c\" returns successfully" Jun 26 07:18:05.220959 containerd[1627]: time="2024-06-26T07:18:05.220589585Z" level=info msg="StopPodSandbox for \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\"" Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.464 [WARNING][4925] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"682fa9e0-fbd0-428c-a73b-d24968366d72", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc", Pod:"coredns-5dd5756b68-2ztk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcd2b1fb593", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.467 [INFO][4925] k8s.go 608: Cleaning up netns ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.467 [INFO][4925] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" iface="eth0" netns="" Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.468 [INFO][4925] k8s.go 615: Releasing IP address(es) ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.468 [INFO][4925] utils.go 188: Calico CNI releasing IP address ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.553 [INFO][4933] ipam_plugin.go 411: Releasing address using handleID ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.555 [INFO][4933] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.555 [INFO][4933] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.573 [WARNING][4933] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.574 [INFO][4933] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.581 [INFO][4933] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:18:05.597158 containerd[1627]: 2024-06-26 07:18:05.587 [INFO][4925] k8s.go 621: Teardown processing complete. 
ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Jun 26 07:18:05.599399 containerd[1627]: time="2024-06-26T07:18:05.597220289Z" level=info msg="TearDown network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\" successfully" Jun 26 07:18:05.599399 containerd[1627]: time="2024-06-26T07:18:05.597259433Z" level=info msg="StopPodSandbox for \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\" returns successfully" Jun 26 07:18:05.601125 containerd[1627]: time="2024-06-26T07:18:05.599638904Z" level=info msg="RemovePodSandbox for \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\"" Jun 26 07:18:05.601125 containerd[1627]: time="2024-06-26T07:18:05.599726760Z" level=info msg="Forcibly stopping sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\"" Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.758 [WARNING][4951] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"682fa9e0-fbd0-428c-a73b-d24968366d72", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"9e56ef307e17a321ab07b40afe8600e18abd2221fcfd7de6e9e7d8a5448fdccc", Pod:"coredns-5dd5756b68-2ztk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcd2b1fb593", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.760 [INFO][4951] k8s.go 608: 
Cleaning up netns ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.761 [INFO][4951] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" iface="eth0" netns="" Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.761 [INFO][4951] k8s.go 615: Releasing IP address(es) ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.761 [INFO][4951] utils.go 188: Calico CNI releasing IP address ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.954 [INFO][4957] ipam_plugin.go 411: Releasing address using handleID ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.954 [INFO][4957] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.955 [INFO][4957] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.974 [WARNING][4957] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.974 [INFO][4957] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" HandleID="k8s-pod-network.c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Workload="ci--4012.0.0--2--1603354b52-k8s-coredns--5dd5756b68--2ztk4-eth0" Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.978 [INFO][4957] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:18:05.993243 containerd[1627]: 2024-06-26 07:18:05.984 [INFO][4951] k8s.go 621: Teardown processing complete. ContainerID="c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c" Jun 26 07:18:06.002623 containerd[1627]: time="2024-06-26T07:18:05.997175335Z" level=info msg="TearDown network for sandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\" successfully" Jun 26 07:18:06.033142 containerd[1627]: time="2024-06-26T07:18:06.033058574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:06.045012 containerd[1627]: time="2024-06-26T07:18:06.044196819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:18:06.045012 containerd[1627]: time="2024-06-26T07:18:06.044326318Z" level=info msg="RemovePodSandbox \"c942e3b6edd04158e0b7153c7e29a782e0994f28365938f136ea7de8eaf5499c\" returns successfully" Jun 26 07:18:06.054942 containerd[1627]: time="2024-06-26T07:18:06.054630385Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:06.074970 containerd[1627]: time="2024-06-26T07:18:06.072902069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:06.077463 containerd[1627]: time="2024-06-26T07:18:06.077211315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 3.485070794s" Jun 26 07:18:06.077463 containerd[1627]: time="2024-06-26T07:18:06.077283756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 26 07:18:06.090725 containerd[1627]: time="2024-06-26T07:18:06.045612010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 26 07:18:06.090725 containerd[1627]: time="2024-06-26T07:18:06.089082619Z" level=info msg="CreateContainer within sandbox \"4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 26 07:18:06.150411 containerd[1627]: time="2024-06-26T07:18:06.150301790Z" level=info msg="CreateContainer within sandbox \"4f3be2f45c8e69039005d0690c901784d51b9206b7a5f5547e1b1ec01ac909db\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8396d36d53ff8a4db71db001116a77f1f3f7716841609352135da6925c0b5bd0\"" Jun 26 07:18:06.153014 containerd[1627]: time="2024-06-26T07:18:06.151442855Z" level=info msg="StartContainer for \"8396d36d53ff8a4db71db001116a77f1f3f7716841609352135da6925c0b5bd0\"" Jun 26 07:18:06.185160 systemd[1]: Started sshd@8-144.126.218.72:22-147.75.109.163:59246.service - OpenSSH per-connection server daemon (147.75.109.163:59246). Jun 26 07:18:06.561040 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 59246 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:06.565484 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:06.603508 systemd-logind[1593]: New session 9 of user core. Jun 26 07:18:06.611247 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 26 07:18:06.643648 containerd[1627]: time="2024-06-26T07:18:06.642513160Z" level=info msg="StartContainer for \"8396d36d53ff8a4db71db001116a77f1f3f7716841609352135da6925c0b5bd0\" returns successfully" Jun 26 07:18:07.450593 sshd[4967]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:07.463260 systemd[1]: sshd@8-144.126.218.72:22-147.75.109.163:59246.service: Deactivated successfully. Jun 26 07:18:07.479101 systemd-logind[1593]: Session 9 logged out. Waiting for processes to exit. Jun 26 07:18:07.481302 systemd[1]: session-9.scope: Deactivated successfully. Jun 26 07:18:07.492794 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:18:07.491318 systemd-resolved[1504]: Under memory pressure, flushing caches. Jun 26 07:18:07.491384 systemd-resolved[1504]: Flushed all caches. Jun 26 07:18:07.493605 systemd-logind[1593]: Removed session 9.
Jun 26 07:18:07.799100 kubelet[2743]: I0626 07:18:07.798597 2743 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 26 07:18:07.808653 kubelet[2743]: I0626 07:18:07.808556 2743 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 26 07:18:12.468418 systemd[1]: Started sshd@9-144.126.218.72:22-147.75.109.163:59256.service - OpenSSH per-connection server daemon (147.75.109.163:59256). Jun 26 07:18:12.577902 sshd[5034]: Accepted publickey for core from 147.75.109.163 port 59256 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:12.581089 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:12.592186 systemd-logind[1593]: New session 10 of user core. Jun 26 07:18:12.597505 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 26 07:18:12.966190 sshd[5034]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:12.990904 systemd-logind[1593]: Session 10 logged out. Waiting for processes to exit. Jun 26 07:18:12.991598 systemd[1]: sshd@9-144.126.218.72:22-147.75.109.163:59256.service: Deactivated successfully. Jun 26 07:18:13.009243 systemd[1]: session-10.scope: Deactivated successfully. Jun 26 07:18:13.012135 systemd-logind[1593]: Removed session 10. Jun 26 07:18:15.492092 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:18:15.486242 systemd-resolved[1504]: Under memory pressure, flushing caches. Jun 26 07:18:15.486253 systemd-resolved[1504]: Flushed all caches. 
Jun 26 07:18:17.140086 kubelet[2743]: E0626 07:18:17.140016 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:17.167805 kubelet[2743]: I0626 07:18:17.167741 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-csr69" podStartSLOduration=43.538612993 podCreationTimestamp="2024-06-26 07:17:21 +0000 UTC" firstStartedPulling="2024-06-26 07:17:53.454451603 +0000 UTC m=+53.713036358" lastFinishedPulling="2024-06-26 07:18:06.083480263 +0000 UTC m=+66.342065017" observedRunningTime="2024-06-26 07:18:07.381079593 +0000 UTC m=+67.639664365" watchObservedRunningTime="2024-06-26 07:18:17.167641652 +0000 UTC m=+77.426226412" Jun 26 07:18:17.536914 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:18:17.534385 systemd-resolved[1504]: Under memory pressure, flushing caches. Jun 26 07:18:17.534395 systemd-resolved[1504]: Flushed all caches. Jun 26 07:18:17.982227 systemd[1]: Started sshd@10-144.126.218.72:22-147.75.109.163:54498.service - OpenSSH per-connection server daemon (147.75.109.163:54498). Jun 26 07:18:18.072109 sshd[5086]: Accepted publickey for core from 147.75.109.163 port 54498 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:18.075382 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:18.085869 systemd-logind[1593]: New session 11 of user core. Jun 26 07:18:18.094784 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 26 07:18:18.427990 sshd[5086]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:18.441351 systemd[1]: Started sshd@11-144.126.218.72:22-147.75.109.163:54500.service - OpenSSH per-connection server daemon (147.75.109.163:54500). 
Jun 26 07:18:18.443238 systemd[1]: sshd@10-144.126.218.72:22-147.75.109.163:54498.service: Deactivated successfully. Jun 26 07:18:18.451289 systemd-logind[1593]: Session 11 logged out. Waiting for processes to exit. Jun 26 07:18:18.452053 systemd[1]: session-11.scope: Deactivated successfully. Jun 26 07:18:18.456310 systemd-logind[1593]: Removed session 11. Jun 26 07:18:18.514127 sshd[5098]: Accepted publickey for core from 147.75.109.163 port 54500 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:18.517286 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:18.527739 systemd-logind[1593]: New session 12 of user core. Jun 26 07:18:18.537363 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 26 07:18:19.161590 sshd[5098]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:19.179713 systemd[1]: Started sshd@12-144.126.218.72:22-147.75.109.163:54502.service - OpenSSH per-connection server daemon (147.75.109.163:54502). Jun 26 07:18:19.185407 systemd[1]: sshd@11-144.126.218.72:22-147.75.109.163:54500.service: Deactivated successfully. Jun 26 07:18:19.227795 systemd-logind[1593]: Session 12 logged out. Waiting for processes to exit. Jun 26 07:18:19.229009 systemd[1]: session-12.scope: Deactivated successfully. Jun 26 07:18:19.232480 systemd-logind[1593]: Removed session 12. Jun 26 07:18:19.389231 sshd[5110]: Accepted publickey for core from 147.75.109.163 port 54502 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:19.394803 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:19.404532 systemd-logind[1593]: New session 13 of user core. Jun 26 07:18:19.411651 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 26 07:18:19.587462 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:18:19.583859 systemd-resolved[1504]: Under memory pressure, flushing caches. 
Jun 26 07:18:19.583871 systemd-resolved[1504]: Flushed all caches. Jun 26 07:18:19.697770 sshd[5110]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:19.705970 systemd[1]: sshd@12-144.126.218.72:22-147.75.109.163:54502.service: Deactivated successfully. Jun 26 07:18:19.719365 systemd[1]: session-13.scope: Deactivated successfully. Jun 26 07:18:19.719644 systemd-logind[1593]: Session 13 logged out. Waiting for processes to exit. Jun 26 07:18:19.722104 systemd-logind[1593]: Removed session 13. Jun 26 07:18:22.065930 kubelet[2743]: E0626 07:18:22.065880 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:22.069286 kubelet[2743]: E0626 07:18:22.069168 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:24.724537 systemd[1]: Started sshd@13-144.126.218.72:22-147.75.109.163:54504.service - OpenSSH per-connection server daemon (147.75.109.163:54504). Jun 26 07:18:24.833481 sshd[5132]: Accepted publickey for core from 147.75.109.163 port 54504 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:24.836972 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:24.881996 systemd-logind[1593]: New session 14 of user core. Jun 26 07:18:24.896296 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 26 07:18:25.410929 sshd[5132]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:25.418412 systemd[1]: sshd@13-144.126.218.72:22-147.75.109.163:54504.service: Deactivated successfully. Jun 26 07:18:25.429204 systemd[1]: session-14.scope: Deactivated successfully. Jun 26 07:18:25.430399 systemd-logind[1593]: Session 14 logged out. Waiting for processes to exit. 
Jun 26 07:18:25.436715 systemd-logind[1593]: Removed session 14. Jun 26 07:18:25.544865 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:18:25.539039 systemd-resolved[1504]: Under memory pressure, flushing caches. Jun 26 07:18:25.539062 systemd-resolved[1504]: Flushed all caches. Jun 26 07:18:27.063359 kubelet[2743]: E0626 07:18:27.063311 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:30.424156 systemd[1]: Started sshd@14-144.126.218.72:22-147.75.109.163:37560.service - OpenSSH per-connection server daemon (147.75.109.163:37560). Jun 26 07:18:30.546009 sshd[5152]: Accepted publickey for core from 147.75.109.163 port 37560 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:30.549346 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:30.562087 systemd-logind[1593]: New session 15 of user core. Jun 26 07:18:30.565774 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 26 07:18:31.121276 sshd[5152]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:31.134137 systemd-logind[1593]: Session 15 logged out. Waiting for processes to exit. Jun 26 07:18:31.134408 systemd[1]: sshd@14-144.126.218.72:22-147.75.109.163:37560.service: Deactivated successfully. Jun 26 07:18:31.141645 systemd[1]: session-15.scope: Deactivated successfully. Jun 26 07:18:31.144219 systemd-logind[1593]: Removed session 15. Jun 26 07:18:35.065897 kubelet[2743]: E0626 07:18:35.065676 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:36.131426 systemd[1]: Started sshd@15-144.126.218.72:22-147.75.109.163:37118.service - OpenSSH per-connection server daemon (147.75.109.163:37118). 
Jun 26 07:18:36.266834 sshd[5173]: Accepted publickey for core from 147.75.109.163 port 37118 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:36.269508 sshd[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:36.285227 systemd-logind[1593]: New session 16 of user core. Jun 26 07:18:36.296411 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 26 07:18:36.668433 systemd[1]: run-containerd-runc-k8s.io-0c43e7c546787790f7c47d7f77e67706e7ba968ea61143c9b1887d6d5783a6d9-runc.jFrQDR.mount: Deactivated successfully. Jun 26 07:18:36.865319 sshd[5173]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:36.885275 systemd[1]: sshd@15-144.126.218.72:22-147.75.109.163:37118.service: Deactivated successfully. Jun 26 07:18:36.930922 systemd[1]: session-16.scope: Deactivated successfully. Jun 26 07:18:36.935384 systemd-logind[1593]: Session 16 logged out. Waiting for processes to exit. Jun 26 07:18:36.937302 systemd-logind[1593]: Removed session 16. Jun 26 07:18:41.881277 systemd[1]: Started sshd@16-144.126.218.72:22-147.75.109.163:37126.service - OpenSSH per-connection server daemon (147.75.109.163:37126). Jun 26 07:18:41.976085 sshd[5212]: Accepted publickey for core from 147.75.109.163 port 37126 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:41.979144 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:41.987517 systemd-logind[1593]: New session 17 of user core. Jun 26 07:18:41.994324 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 26 07:18:42.213997 sshd[5212]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:42.226142 systemd[1]: Started sshd@17-144.126.218.72:22-147.75.109.163:37134.service - OpenSSH per-connection server daemon (147.75.109.163:37134). Jun 26 07:18:42.237184 systemd[1]: sshd@16-144.126.218.72:22-147.75.109.163:37126.service: Deactivated successfully. 
Jun 26 07:18:42.248226 systemd[1]: session-17.scope: Deactivated successfully. Jun 26 07:18:42.251115 systemd-logind[1593]: Session 17 logged out. Waiting for processes to exit. Jun 26 07:18:42.253939 systemd-logind[1593]: Removed session 17. Jun 26 07:18:42.291322 sshd[5222]: Accepted publickey for core from 147.75.109.163 port 37134 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:42.294216 sshd[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:42.303832 systemd-logind[1593]: New session 18 of user core. Jun 26 07:18:42.311284 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 26 07:18:42.825093 sshd[5222]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:42.840330 systemd[1]: Started sshd@18-144.126.218.72:22-147.75.109.163:37138.service - OpenSSH per-connection server daemon (147.75.109.163:37138). Jun 26 07:18:42.841710 systemd[1]: sshd@17-144.126.218.72:22-147.75.109.163:37134.service: Deactivated successfully. Jun 26 07:18:42.854805 systemd-logind[1593]: Session 18 logged out. Waiting for processes to exit. Jun 26 07:18:42.856270 systemd[1]: session-18.scope: Deactivated successfully. Jun 26 07:18:42.862304 systemd-logind[1593]: Removed session 18. Jun 26 07:18:42.970139 sshd[5235]: Accepted publickey for core from 147.75.109.163 port 37138 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:42.973178 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:42.984311 systemd-logind[1593]: New session 19 of user core. Jun 26 07:18:42.992446 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 26 07:18:44.636166 sshd[5235]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:44.651782 systemd[1]: Started sshd@19-144.126.218.72:22-147.75.109.163:37150.service - OpenSSH per-connection server daemon (147.75.109.163:37150). 
Jun 26 07:18:44.658594 systemd[1]: sshd@18-144.126.218.72:22-147.75.109.163:37138.service: Deactivated successfully. Jun 26 07:18:44.682318 systemd[1]: session-19.scope: Deactivated successfully. Jun 26 07:18:44.683825 systemd-logind[1593]: Session 19 logged out. Waiting for processes to exit. Jun 26 07:18:44.690501 systemd-logind[1593]: Removed session 19. Jun 26 07:18:44.800719 sshd[5262]: Accepted publickey for core from 147.75.109.163 port 37150 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:44.803169 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:44.823672 systemd-logind[1593]: New session 20 of user core. Jun 26 07:18:44.835357 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 26 07:18:45.505620 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:18:45.502148 systemd-resolved[1504]: Under memory pressure, flushing caches. Jun 26 07:18:45.502161 systemd-resolved[1504]: Flushed all caches. 
Jun 26 07:18:46.815325 kubelet[2743]: I0626 07:18:46.815250 2743 topology_manager.go:215] "Topology Admit Handler" podUID="b07e35d1-ab47-4a6c-963d-2a685ea6bc5e" podNamespace="calico-apiserver" podName="calico-apiserver-899b7bb85-mdhfb" Jun 26 07:18:46.916623 kubelet[2743]: I0626 07:18:46.916451 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b07e35d1-ab47-4a6c-963d-2a685ea6bc5e-calico-apiserver-certs\") pod \"calico-apiserver-899b7bb85-mdhfb\" (UID: \"b07e35d1-ab47-4a6c-963d-2a685ea6bc5e\") " pod="calico-apiserver/calico-apiserver-899b7bb85-mdhfb" Jun 26 07:18:46.916623 kubelet[2743]: I0626 07:18:46.916518 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgg8l\" (UniqueName: \"kubernetes.io/projected/b07e35d1-ab47-4a6c-963d-2a685ea6bc5e-kube-api-access-kgg8l\") pod \"calico-apiserver-899b7bb85-mdhfb\" (UID: \"b07e35d1-ab47-4a6c-963d-2a685ea6bc5e\") " pod="calico-apiserver/calico-apiserver-899b7bb85-mdhfb" Jun 26 07:18:47.003430 sshd[5262]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:47.025325 systemd[1]: Started sshd@20-144.126.218.72:22-147.75.109.163:51762.service - OpenSSH per-connection server daemon (147.75.109.163:51762). Jun 26 07:18:47.105340 systemd[1]: sshd@19-144.126.218.72:22-147.75.109.163:37150.service: Deactivated successfully. Jun 26 07:18:47.115714 kubelet[2743]: E0626 07:18:47.034915 2743 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 26 07:18:47.141392 systemd[1]: session-20.scope: Deactivated successfully. Jun 26 07:18:47.167761 systemd-logind[1593]: Session 20 logged out. Waiting for processes to exit. Jun 26 07:18:47.190623 systemd-logind[1593]: Removed session 20. 
Jun 26 07:18:47.216848 kubelet[2743]: E0626 07:18:47.216085 2743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b07e35d1-ab47-4a6c-963d-2a685ea6bc5e-calico-apiserver-certs podName:b07e35d1-ab47-4a6c-963d-2a685ea6bc5e nodeName:}" failed. No retries permitted until 2024-06-26 07:18:47.616625593 +0000 UTC m=+107.875210347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/b07e35d1-ab47-4a6c-963d-2a685ea6bc5e-calico-apiserver-certs") pod "calico-apiserver-899b7bb85-mdhfb" (UID: "b07e35d1-ab47-4a6c-963d-2a685ea6bc5e") : secret "calico-apiserver-certs" not found Jun 26 07:18:47.363602 sshd[5289]: Accepted publickey for core from 147.75.109.163 port 51762 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:47.369875 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:47.436445 systemd-logind[1593]: New session 21 of user core. Jun 26 07:18:47.438715 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 26 07:18:47.561733 systemd-journald[1167]: Under memory pressure, flushing caches. Jun 26 07:18:47.551789 systemd-resolved[1504]: Under memory pressure, flushing caches. Jun 26 07:18:47.551805 systemd-resolved[1504]: Flushed all caches. Jun 26 07:18:47.763484 containerd[1627]: time="2024-06-26T07:18:47.763403979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899b7bb85-mdhfb,Uid:b07e35d1-ab47-4a6c-963d-2a685ea6bc5e,Namespace:calico-apiserver,Attempt:0,}" Jun 26 07:18:47.919997 sshd[5289]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:47.938247 systemd[1]: sshd@20-144.126.218.72:22-147.75.109.163:51762.service: Deactivated successfully. Jun 26 07:18:47.955736 systemd[1]: session-21.scope: Deactivated successfully. Jun 26 07:18:47.957460 systemd-logind[1593]: Session 21 logged out. Waiting for processes to exit. 
Jun 26 07:18:47.966832 systemd-logind[1593]: Removed session 21. Jun 26 07:18:48.623274 systemd-networkd[1248]: cali7e8caca4978: Link UP Jun 26 07:18:48.627967 systemd-networkd[1248]: cali7e8caca4978: Gained carrier Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.213 [INFO][5320] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0 calico-apiserver-899b7bb85- calico-apiserver b07e35d1-ab47-4a6c-963d-2a685ea6bc5e 1190 0 2024-06-26 07:18:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:899b7bb85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-2-1603354b52 calico-apiserver-899b7bb85-mdhfb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7e8caca4978 [] []}} ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Namespace="calico-apiserver" Pod="calico-apiserver-899b7bb85-mdhfb" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-" Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.215 [INFO][5320] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Namespace="calico-apiserver" Pod="calico-apiserver-899b7bb85-mdhfb" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0" Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.430 [INFO][5335] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" HandleID="k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" 
Workload="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0" Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.453 [INFO][5335] ipam_plugin.go 264: Auto assigning IP ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" HandleID="k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051c60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-2-1603354b52", "pod":"calico-apiserver-899b7bb85-mdhfb", "timestamp":"2024-06-26 07:18:48.430419461 +0000 UTC"}, Hostname:"ci-4012.0.0-2-1603354b52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.453 [INFO][5335] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.453 [INFO][5335] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.453 [INFO][5335] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-2-1603354b52'
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.460 [INFO][5335] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.476 [INFO][5335] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.500 [INFO][5335] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.506 [INFO][5335] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.516 [INFO][5335] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.517 [INFO][5335] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.528 [INFO][5335] ipam.go 1685: Creating new handle: k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.558 [INFO][5335] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.582 [INFO][5335] ipam.go 1216: Successfully claimed IPs: [192.168.23.5/26] block=192.168.23.0/26 handle="k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.582 [INFO][5335] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.5/26] handle="k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" host="ci-4012.0.0-2-1603354b52"
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.583 [INFO][5335] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 26 07:18:48.702711 containerd[1627]: 2024-06-26 07:18:48.583 [INFO][5335] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.23.5/26] IPv6=[] ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" HandleID="k8s-pod-network.2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Workload="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0"
Jun 26 07:18:48.704605 containerd[1627]: 2024-06-26 07:18:48.595 [INFO][5320] k8s.go 386: Populated endpoint ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Namespace="calico-apiserver" Pod="calico-apiserver-899b7bb85-mdhfb" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0", GenerateName:"calico-apiserver-899b7bb85-", Namespace:"calico-apiserver", SelfLink:"", UID:"b07e35d1-ab47-4a6c-963d-2a685ea6bc5e", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 18, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899b7bb85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"", Pod:"calico-apiserver-899b7bb85-mdhfb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e8caca4978", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 26 07:18:48.704605 containerd[1627]: 2024-06-26 07:18:48.617 [INFO][5320] k8s.go 387: Calico CNI using IPs: [192.168.23.5/32] ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Namespace="calico-apiserver" Pod="calico-apiserver-899b7bb85-mdhfb" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0"
Jun 26 07:18:48.704605 containerd[1627]: 2024-06-26 07:18:48.617 [INFO][5320] dataplane_linux.go 68: Setting the host side veth name to cali7e8caca4978 ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Namespace="calico-apiserver" Pod="calico-apiserver-899b7bb85-mdhfb" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0"
Jun 26 07:18:48.704605 containerd[1627]: 2024-06-26 07:18:48.632 [INFO][5320] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Namespace="calico-apiserver" Pod="calico-apiserver-899b7bb85-mdhfb" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0"
Jun 26 07:18:48.704605 containerd[1627]: 2024-06-26 07:18:48.633 [INFO][5320] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Namespace="calico-apiserver" Pod="calico-apiserver-899b7bb85-mdhfb" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0", GenerateName:"calico-apiserver-899b7bb85-", Namespace:"calico-apiserver", SelfLink:"", UID:"b07e35d1-ab47-4a6c-963d-2a685ea6bc5e", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 18, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899b7bb85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-2-1603354b52", ContainerID:"2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697", Pod:"calico-apiserver-899b7bb85-mdhfb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e8caca4978", MAC:"82:78:ef:d1:96:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 26 07:18:48.704605 containerd[1627]: 2024-06-26 07:18:48.695 [INFO][5320] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697" Namespace="calico-apiserver" Pod="calico-apiserver-899b7bb85-mdhfb" WorkloadEndpoint="ci--4012.0.0--2--1603354b52-k8s-calico--apiserver--899b7bb85--mdhfb-eth0"
Jun 26 07:18:48.935048 containerd[1627]: time="2024-06-26T07:18:48.933543934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:48.935048 containerd[1627]: time="2024-06-26T07:18:48.933645866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:48.935048 containerd[1627]: time="2024-06-26T07:18:48.933674515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:48.935048 containerd[1627]: time="2024-06-26T07:18:48.933727468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:49.161150 containerd[1627]: time="2024-06-26T07:18:49.160974147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899b7bb85-mdhfb,Uid:b07e35d1-ab47-4a6c-963d-2a685ea6bc5e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697\""
Jun 26 07:18:49.172084 containerd[1627]: time="2024-06-26T07:18:49.171539911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jun 26 07:18:50.367152 systemd-networkd[1248]: cali7e8caca4978: Gained IPv6LL
Jun 26 07:18:52.936542 systemd[1]: Started sshd@21-144.126.218.72:22-147.75.109.163:51764.service - OpenSSH per-connection server daemon (147.75.109.163:51764).
Jun 26 07:18:53.218284 sshd[5422]: Accepted publickey for core from 147.75.109.163 port 51764 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:18:53.225358 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:18:53.243846 systemd-logind[1593]: New session 22 of user core.
Jun 26 07:18:53.249613 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 26 07:18:53.436706 containerd[1627]: time="2024-06-26T07:18:53.436570898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:53.441992 containerd[1627]: time="2024-06-26T07:18:53.441798905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jun 26 07:18:53.463783 containerd[1627]: time="2024-06-26T07:18:53.458879136Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:53.475993 containerd[1627]: time="2024-06-26T07:18:53.474994143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:53.480463 containerd[1627]: time="2024-06-26T07:18:53.480375799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.308768441s"
Jun 26 07:18:53.480463 containerd[1627]: time="2024-06-26T07:18:53.480464654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jun 26 07:18:53.488321 containerd[1627]: time="2024-06-26T07:18:53.488050539Z" level=info msg="CreateContainer within sandbox \"2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 26 07:18:53.516032 systemd-journald[1167]: Under memory pressure, flushing caches.
Jun 26 07:18:53.505338 systemd-resolved[1504]: Under memory pressure, flushing caches.
Jun 26 07:18:53.505436 systemd-resolved[1504]: Flushed all caches.
Jun 26 07:18:53.586302 containerd[1627]: time="2024-06-26T07:18:53.586239307Z" level=info msg="CreateContainer within sandbox \"2539bbbc74597cef80f876c03c41fa707dcc9eef22fa028d25968466fc3f5697\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"23391c620a52c26949055869040c623831889ec1afa4950d8edccb57123db8fb\""
Jun 26 07:18:53.587613 containerd[1627]: time="2024-06-26T07:18:53.587565077Z" level=info msg="StartContainer for \"23391c620a52c26949055869040c623831889ec1afa4950d8edccb57123db8fb\""
Jun 26 07:18:54.023377 containerd[1627]: time="2024-06-26T07:18:54.023307373Z" level=info msg="StartContainer for \"23391c620a52c26949055869040c623831889ec1afa4950d8edccb57123db8fb\" returns successfully"
Jun 26 07:18:54.254007 sshd[5422]: pam_unix(sshd:session): session closed for user core
Jun 26 07:18:54.284494 systemd[1]: sshd@21-144.126.218.72:22-147.75.109.163:51764.service: Deactivated successfully.
Jun 26 07:18:54.299170 systemd[1]: session-22.scope: Deactivated successfully.
Jun 26 07:18:54.303348 systemd-logind[1593]: Session 22 logged out. Waiting for processes to exit.
Jun 26 07:18:54.309602 systemd-logind[1593]: Removed session 22.
Jun 26 07:18:55.555915 systemd-journald[1167]: Under memory pressure, flushing caches.
Jun 26 07:18:55.551334 systemd-resolved[1504]: Under memory pressure, flushing caches.
Jun 26 07:18:55.551345 systemd-resolved[1504]: Flushed all caches.
Jun 26 07:18:55.633502 kubelet[2743]: I0626 07:18:55.633322 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-899b7bb85-mdhfb" podStartSLOduration=5.317654258 podCreationTimestamp="2024-06-26 07:18:46 +0000 UTC" firstStartedPulling="2024-06-26 07:18:49.165997192 +0000 UTC m=+109.424581941" lastFinishedPulling="2024-06-26 07:18:53.48157604 +0000 UTC m=+113.740160785" observedRunningTime="2024-06-26 07:18:54.725274551 +0000 UTC m=+114.983859313" watchObservedRunningTime="2024-06-26 07:18:55.633233102 +0000 UTC m=+115.891817858"
Jun 26 07:18:59.228845 systemd[1]: Started sshd@22-144.126.218.72:22-147.75.109.163:46050.service - OpenSSH per-connection server daemon (147.75.109.163:46050).
Jun 26 07:18:59.371801 sshd[5491]: Accepted publickey for core from 147.75.109.163 port 46050 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:18:59.378048 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:18:59.389753 systemd-logind[1593]: New session 23 of user core.
Jun 26 07:18:59.400627 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 26 07:18:59.828847 sshd[5491]: pam_unix(sshd:session): session closed for user core
Jun 26 07:18:59.840522 systemd[1]: sshd@22-144.126.218.72:22-147.75.109.163:46050.service: Deactivated successfully.
Jun 26 07:18:59.860892 systemd-logind[1593]: Session 23 logged out. Waiting for processes to exit.
Jun 26 07:18:59.861743 systemd[1]: session-23.scope: Deactivated successfully.
Jun 26 07:18:59.866100 systemd-logind[1593]: Removed session 23.
Jun 26 07:19:04.841614 systemd[1]: Started sshd@23-144.126.218.72:22-147.75.109.163:46052.service - OpenSSH per-connection server daemon (147.75.109.163:46052).
Jun 26 07:19:05.020261 sshd[5510]: Accepted publickey for core from 147.75.109.163 port 46052 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:05.023378 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:05.035166 systemd-logind[1593]: New session 24 of user core.
Jun 26 07:19:05.042225 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 26 07:19:05.362672 sshd[5510]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:05.368995 systemd-logind[1593]: Session 24 logged out. Waiting for processes to exit.
Jun 26 07:19:05.369367 systemd[1]: sshd@23-144.126.218.72:22-147.75.109.163:46052.service: Deactivated successfully.
Jun 26 07:19:05.380191 systemd[1]: session-24.scope: Deactivated successfully.
Jun 26 07:19:05.382938 systemd-logind[1593]: Removed session 24.
Jun 26 07:19:10.072593 kubelet[2743]: E0626 07:19:10.072447 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:10.376399 systemd[1]: Started sshd@24-144.126.218.72:22-147.75.109.163:45142.service - OpenSSH per-connection server daemon (147.75.109.163:45142).
Jun 26 07:19:10.467545 sshd[5552]: Accepted publickey for core from 147.75.109.163 port 45142 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:10.468303 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:10.486314 systemd-logind[1593]: New session 25 of user core.
Jun 26 07:19:10.494040 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 26 07:19:10.823454 sshd[5552]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:10.850611 systemd[1]: sshd@24-144.126.218.72:22-147.75.109.163:45142.service: Deactivated successfully.
Jun 26 07:19:10.859256 systemd-logind[1593]: Session 25 logged out. Waiting for processes to exit.
Jun 26 07:19:10.860580 systemd[1]: session-25.scope: Deactivated successfully.
Jun 26 07:19:10.864758 systemd-logind[1593]: Removed session 25.
Jun 26 07:19:15.839234 systemd[1]: Started sshd@25-144.126.218.72:22-147.75.109.163:45154.service - OpenSSH per-connection server daemon (147.75.109.163:45154).
Jun 26 07:19:16.025547 sshd[5574]: Accepted publickey for core from 147.75.109.163 port 45154 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:16.036849 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:16.048124 systemd-logind[1593]: New session 26 of user core.
Jun 26 07:19:16.055092 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 26 07:19:16.320107 sshd[5574]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:16.328766 systemd[1]: sshd@25-144.126.218.72:22-147.75.109.163:45154.service: Deactivated successfully.
Jun 26 07:19:16.349155 systemd[1]: session-26.scope: Deactivated successfully.
Jun 26 07:19:16.351599 systemd-logind[1593]: Session 26 logged out. Waiting for processes to exit.
Jun 26 07:19:16.382770 systemd-logind[1593]: Removed session 26.
Jun 26 07:19:21.344231 systemd[1]: Started sshd@26-144.126.218.72:22-147.75.109.163:36468.service - OpenSSH per-connection server daemon (147.75.109.163:36468).
Jun 26 07:19:21.694953 sshd[5619]: Accepted publickey for core from 147.75.109.163 port 36468 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:21.700144 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:21.717192 systemd-logind[1593]: New session 27 of user core.
Jun 26 07:19:21.726110 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 26 07:19:22.065742 kubelet[2743]: E0626 07:19:22.065614 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:22.130001 sshd[5619]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:22.136618 systemd[1]: sshd@26-144.126.218.72:22-147.75.109.163:36468.service: Deactivated successfully.
Jun 26 07:19:22.144085 systemd[1]: session-27.scope: Deactivated successfully.
Jun 26 07:19:22.147841 systemd-logind[1593]: Session 27 logged out. Waiting for processes to exit.
Jun 26 07:19:22.151382 systemd-logind[1593]: Removed session 27.