Jul 2 00:17:14.119352 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024 Jul 2 00:17:14.119399 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:17:14.119433 kernel: BIOS-provided physical RAM map: Jul 2 00:17:14.119447 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 00:17:14.119457 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 00:17:14.119469 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 00:17:14.119482 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jul 2 00:17:14.119493 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jul 2 00:17:14.119504 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 2 00:17:14.119521 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 00:17:14.119532 kernel: NX (Execute Disable) protection: active Jul 2 00:17:14.119543 kernel: APIC: Static calls initialized Jul 2 00:17:14.119564 kernel: SMBIOS 2.8 present. 
Jul 2 00:17:14.119575 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jul 2 00:17:14.119588 kernel: Hypervisor detected: KVM Jul 2 00:17:14.119605 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 00:17:14.119620 kernel: kvm-clock: using sched offset of 3285439638 cycles Jul 2 00:17:14.119633 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 00:17:14.119646 kernel: tsc: Detected 2494.140 MHz processor Jul 2 00:17:14.119659 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 00:17:14.119673 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 00:17:14.119684 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jul 2 00:17:14.120215 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 2 00:17:14.120229 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 00:17:14.120251 kernel: ACPI: Early table checksum verification disabled Jul 2 00:17:14.120264 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jul 2 00:17:14.120277 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:17:14.120290 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:17:14.120303 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:17:14.120316 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jul 2 00:17:14.120327 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:17:14.120338 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:17:14.120350 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:17:14.120366 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:17:14.120378 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Jul 2 00:17:14.120391 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jul 2 00:17:14.120405 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jul 2 00:17:14.120417 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jul 2 00:17:14.120430 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jul 2 00:17:14.120443 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jul 2 00:17:14.120464 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jul 2 00:17:14.120480 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 00:17:14.120494 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 00:17:14.120508 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 00:17:14.120522 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 2 00:17:14.120536 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jul 2 00:17:14.120550 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jul 2 00:17:14.120568 kernel: Zone ranges: Jul 2 00:17:14.120582 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 00:17:14.120595 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jul 2 00:17:14.120608 kernel: Normal empty Jul 2 00:17:14.120622 kernel: Movable zone start for each node Jul 2 00:17:14.120632 kernel: Early memory node ranges Jul 2 00:17:14.120640 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 00:17:14.120649 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jul 2 00:17:14.120657 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jul 2 00:17:14.120673 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:17:14.120687 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 00:17:14.121325 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jul 2 00:17:14.121340 kernel: 
ACPI: PM-Timer IO Port: 0x608 Jul 2 00:17:14.121353 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 00:17:14.121367 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 00:17:14.121380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 00:17:14.121394 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 00:17:14.121408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 00:17:14.121431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 00:17:14.121465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 00:17:14.121479 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 00:17:14.121492 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 00:17:14.121505 kernel: TSC deadline timer available Jul 2 00:17:14.121520 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 00:17:14.121533 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 2 00:17:14.121546 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jul 2 00:17:14.121559 kernel: Booting paravirtualized kernel on KVM Jul 2 00:17:14.121578 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 00:17:14.121592 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 2 00:17:14.121605 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jul 2 00:17:14.121618 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jul 2 00:17:14.121630 kernel: pcpu-alloc: [0] 0 1 Jul 2 00:17:14.121643 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 2 00:17:14.121660 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT 
console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:17:14.121674 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:17:14.121797 kernel: random: crng init done Jul 2 00:17:14.121811 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:17:14.121825 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 00:17:14.121837 kernel: Fallback order for Node 0: 0 Jul 2 00:17:14.121850 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jul 2 00:17:14.121863 kernel: Policy zone: DMA32 Jul 2 00:17:14.121876 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:17:14.121886 kernel: Memory: 1965048K/2096600K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131292K reserved, 0K cma-reserved) Jul 2 00:17:14.121900 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 00:17:14.121919 kernel: Kernel/User page tables isolation: enabled Jul 2 00:17:14.121932 kernel: ftrace: allocating 37658 entries in 148 pages Jul 2 00:17:14.121941 kernel: ftrace: allocated 148 pages with 3 groups Jul 2 00:17:14.121950 kernel: Dynamic Preempt: voluntary Jul 2 00:17:14.121962 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:17:14.121975 kernel: rcu: RCU event tracing is enabled. Jul 2 00:17:14.121986 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 00:17:14.121999 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:17:14.122011 kernel: Rude variant of Tasks RCU enabled. Jul 2 00:17:14.122028 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:17:14.122039 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 00:17:14.122051 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 00:17:14.122065 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 00:17:14.122078 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 00:17:14.122091 kernel: Console: colour VGA+ 80x25 Jul 2 00:17:14.122104 kernel: printk: console [tty0] enabled Jul 2 00:17:14.122118 kernel: printk: console [ttyS0] enabled Jul 2 00:17:14.122132 kernel: ACPI: Core revision 20230628 Jul 2 00:17:14.122145 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 00:17:14.122160 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 00:17:14.122173 kernel: x2apic enabled Jul 2 00:17:14.122184 kernel: APIC: Switched APIC routing to: physical x2apic Jul 2 00:17:14.122195 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 00:17:14.122206 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jul 2 00:17:14.122216 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Jul 2 00:17:14.122229 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 2 00:17:14.122239 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 2 00:17:14.122261 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 00:17:14.122271 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 00:17:14.122281 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 00:17:14.122298 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 00:17:14.122307 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jul 2 00:17:14.122320 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 00:17:14.122332 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 2 00:17:14.122346 kernel: MDS: Mitigation: Clear CPU buffers Jul 2 00:17:14.122360 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 00:17:14.122379 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 00:17:14.122394 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 00:17:14.122407 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 00:17:14.122420 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 00:17:14.122435 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 00:17:14.122449 kernel: Freeing SMP alternatives memory: 32K Jul 2 00:17:14.122462 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:17:14.122477 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:17:14.122499 kernel: SELinux: Initializing. 
Jul 2 00:17:14.122514 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 00:17:14.122528 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 00:17:14.122544 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jul 2 00:17:14.122558 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:17:14.122571 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:17:14.122581 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:17:14.122590 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jul 2 00:17:14.122604 kernel: signal: max sigframe size: 1776 Jul 2 00:17:14.122613 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:17:14.122623 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:17:14.122632 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 00:17:14.122643 kernel: smp: Bringing up secondary CPUs ... Jul 2 00:17:14.122656 kernel: smpboot: x86: Booting SMP configuration: Jul 2 00:17:14.122669 kernel: .... 
node #0, CPUs: #1 Jul 2 00:17:14.122684 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 00:17:14.122714 kernel: smpboot: Max logical packages: 1 Jul 2 00:17:14.122729 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Jul 2 00:17:14.122746 kernel: devtmpfs: initialized Jul 2 00:17:14.122758 kernel: x86/mm: Memory block size: 128MB Jul 2 00:17:14.122771 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:17:14.122786 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 00:17:14.122798 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:17:14.122812 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:17:14.122825 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:17:14.122841 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:17:14.122855 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 00:17:14.122870 kernel: audit: type=2000 audit(1719879432.925:1): state=initialized audit_enabled=0 res=1 Jul 2 00:17:14.122879 kernel: cpuidle: using governor menu Jul 2 00:17:14.122889 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:17:14.122899 kernel: dca service started, version 1.12.1 Jul 2 00:17:14.122908 kernel: PCI: Using configuration type 1 for base access Jul 2 00:17:14.122923 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 00:17:14.122938 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:17:14.122953 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:17:14.122966 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:17:14.122980 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:17:14.122990 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:17:14.122999 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:17:14.123008 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 00:17:14.123017 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 2 00:17:14.123026 kernel: ACPI: Interpreter enabled Jul 2 00:17:14.123035 kernel: ACPI: PM: (supports S0 S5) Jul 2 00:17:14.123049 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 00:17:14.123061 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 00:17:14.123075 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 00:17:14.123088 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 00:17:14.123100 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 00:17:14.123456 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 00:17:14.123592 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 2 00:17:14.123771 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 2 00:17:14.123792 kernel: acpiphp: Slot [3] registered Jul 2 00:17:14.123819 kernel: acpiphp: Slot [4] registered Jul 2 00:17:14.123833 kernel: acpiphp: Slot [5] registered Jul 2 00:17:14.123848 kernel: acpiphp: Slot [6] registered Jul 2 00:17:14.123873 kernel: acpiphp: Slot [7] registered Jul 2 00:17:14.123887 kernel: acpiphp: Slot [8] registered Jul 2 00:17:14.123901 kernel: acpiphp: Slot [9] registered Jul 2 00:17:14.123916 kernel: acpiphp: Slot [10] 
registered Jul 2 00:17:14.123931 kernel: acpiphp: Slot [11] registered Jul 2 00:17:14.123946 kernel: acpiphp: Slot [12] registered Jul 2 00:17:14.123967 kernel: acpiphp: Slot [13] registered Jul 2 00:17:14.123983 kernel: acpiphp: Slot [14] registered Jul 2 00:17:14.124001 kernel: acpiphp: Slot [15] registered Jul 2 00:17:14.124018 kernel: acpiphp: Slot [16] registered Jul 2 00:17:14.124033 kernel: acpiphp: Slot [17] registered Jul 2 00:17:14.124047 kernel: acpiphp: Slot [18] registered Jul 2 00:17:14.124063 kernel: acpiphp: Slot [19] registered Jul 2 00:17:14.124079 kernel: acpiphp: Slot [20] registered Jul 2 00:17:14.124094 kernel: acpiphp: Slot [21] registered Jul 2 00:17:14.124109 kernel: acpiphp: Slot [22] registered Jul 2 00:17:14.124132 kernel: acpiphp: Slot [23] registered Jul 2 00:17:14.124148 kernel: acpiphp: Slot [24] registered Jul 2 00:17:14.124166 kernel: acpiphp: Slot [25] registered Jul 2 00:17:14.124184 kernel: acpiphp: Slot [26] registered Jul 2 00:17:14.124218 kernel: acpiphp: Slot [27] registered Jul 2 00:17:14.124235 kernel: acpiphp: Slot [28] registered Jul 2 00:17:14.124252 kernel: acpiphp: Slot [29] registered Jul 2 00:17:14.124269 kernel: acpiphp: Slot [30] registered Jul 2 00:17:14.124285 kernel: acpiphp: Slot [31] registered Jul 2 00:17:14.124306 kernel: PCI host bridge to bus 0000:00 Jul 2 00:17:14.124529 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 00:17:14.124698 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 00:17:14.124875 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 00:17:14.125026 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 2 00:17:14.125178 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jul 2 00:17:14.125325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 00:17:14.125585 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 00:17:14.125927 
kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 00:17:14.126101 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 00:17:14.126253 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jul 2 00:17:14.126395 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 00:17:14.126530 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 00:17:14.128817 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 00:17:14.129085 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 00:17:14.129288 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jul 2 00:17:14.129466 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jul 2 00:17:14.129662 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 00:17:14.129881 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 2 00:17:14.130043 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 2 00:17:14.130222 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jul 2 00:17:14.130364 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jul 2 00:17:14.130496 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jul 2 00:17:14.130650 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jul 2 00:17:14.132905 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jul 2 00:17:14.133078 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 00:17:14.133247 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 2 00:17:14.133395 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jul 2 00:17:14.133559 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jul 2 00:17:14.133769 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jul 2 00:17:14.133938 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 
0x020000 Jul 2 00:17:14.134080 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jul 2 00:17:14.134214 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jul 2 00:17:14.134392 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jul 2 00:17:14.134565 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jul 2 00:17:14.134773 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jul 2 00:17:14.134880 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jul 2 00:17:14.134979 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jul 2 00:17:14.135146 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jul 2 00:17:14.135297 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 00:17:14.135476 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jul 2 00:17:14.135638 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jul 2 00:17:14.136927 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jul 2 00:17:14.137087 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jul 2 00:17:14.137197 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jul 2 00:17:14.137303 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jul 2 00:17:14.137421 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jul 2 00:17:14.137620 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jul 2 00:17:14.137980 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jul 2 00:17:14.138002 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 00:17:14.138013 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 00:17:14.138023 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 00:17:14.138033 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 00:17:14.138043 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 00:17:14.138062 kernel: 
iommu: Default domain type: Translated Jul 2 00:17:14.138071 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 00:17:14.138081 kernel: PCI: Using ACPI for IRQ routing Jul 2 00:17:14.138091 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 00:17:14.138101 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 00:17:14.138110 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jul 2 00:17:14.139816 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 00:17:14.141146 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 00:17:14.141309 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 00:17:14.141345 kernel: vgaarb: loaded Jul 2 00:17:14.141358 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 00:17:14.141369 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 00:17:14.141380 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 00:17:14.141393 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 00:17:14.141407 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:17:14.141422 kernel: pnp: PnP ACPI init Jul 2 00:17:14.141453 kernel: pnp: PnP ACPI: found 4 devices Jul 2 00:17:14.141468 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 00:17:14.141489 kernel: NET: Registered PF_INET protocol family Jul 2 00:17:14.141505 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:17:14.141518 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 00:17:14.141532 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:17:14.141546 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 00:17:14.141559 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 00:17:14.141573 kernel: TCP: Hash tables configured 
(established 16384 bind 16384) Jul 2 00:17:14.141587 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 00:17:14.141601 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 00:17:14.141620 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:17:14.141635 kernel: NET: Registered PF_XDP protocol family Jul 2 00:17:14.141871 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 00:17:14.142014 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 00:17:14.142140 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 00:17:14.142280 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 2 00:17:14.142407 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jul 2 00:17:14.142553 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 00:17:14.142792 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 00:17:14.142818 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 00:17:14.142982 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 42968 usecs Jul 2 00:17:14.143001 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:17:14.143013 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 00:17:14.143023 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jul 2 00:17:14.143033 kernel: Initialise system trusted keyrings Jul 2 00:17:14.143043 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 00:17:14.143060 kernel: Key type asymmetric registered Jul 2 00:17:14.143070 kernel: Asymmetric key parser 'x509' registered Jul 2 00:17:14.143084 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 2 00:17:14.143098 kernel: io scheduler mq-deadline registered Jul 2 00:17:14.143108 kernel: io scheduler kyber registered Jul 2 00:17:14.143119 kernel: io scheduler bfq 
registered Jul 2 00:17:14.143133 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 00:17:14.143143 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 2 00:17:14.143155 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 00:17:14.143169 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 00:17:14.143187 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:17:14.143198 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 00:17:14.143213 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 00:17:14.143223 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 00:17:14.143233 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 00:17:14.143416 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 2 00:17:14.143440 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jul 2 00:17:14.143587 kernel: rtc_cmos 00:03: registered as rtc0 Jul 2 00:17:14.143745 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T00:17:13 UTC (1719879433) Jul 2 00:17:14.143860 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 2 00:17:14.143877 kernel: intel_pstate: CPU model not supported Jul 2 00:17:14.143892 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:17:14.143906 kernel: Segment Routing with IPv6 Jul 2 00:17:14.143919 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:17:14.143932 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:17:14.143946 kernel: Key type dns_resolver registered Jul 2 00:17:14.143971 kernel: IPI shorthand broadcast: enabled Jul 2 00:17:14.143985 kernel: sched_clock: Marking stable (1173007448, 97558923)->(1302553438, -31987067) Jul 2 00:17:14.144000 kernel: registered taskstats version 1 Jul 2 00:17:14.144012 kernel: Loading compiled-in X.509 certificates Jul 2 00:17:14.144025 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 
2 00:17:14.144038 kernel: Key type .fscrypt registered Jul 2 00:17:14.144053 kernel: Key type fscrypt-provisioning registered Jul 2 00:17:14.144067 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:17:14.144082 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:17:14.144097 kernel: ima: No architecture policies found Jul 2 00:17:14.144107 kernel: clk: Disabling unused clocks Jul 2 00:17:14.144117 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 00:17:14.144126 kernel: Write protecting the kernel read-only data: 36864k Jul 2 00:17:14.144136 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 00:17:14.144185 kernel: Run /init as init process Jul 2 00:17:14.144200 kernel: with arguments: Jul 2 00:17:14.144214 kernel: /init Jul 2 00:17:14.144227 kernel: with environment: Jul 2 00:17:14.144246 kernel: HOME=/ Jul 2 00:17:14.144260 kernel: TERM=linux Jul 2 00:17:14.144274 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:17:14.144294 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:17:14.144313 systemd[1]: Detected virtualization kvm. Jul 2 00:17:14.144328 systemd[1]: Detected architecture x86-64. Jul 2 00:17:14.144344 systemd[1]: Running in initrd. Jul 2 00:17:14.144360 systemd[1]: No hostname configured, using default hostname. Jul 2 00:17:14.144381 systemd[1]: Hostname set to . Jul 2 00:17:14.144397 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:17:14.144412 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:17:14.144422 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 2 00:17:14.144432 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:17:14.144444 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 00:17:14.144454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:17:14.144473 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 00:17:14.144487 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 00:17:14.144503 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 00:17:14.144517 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 00:17:14.144534 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:17:14.144549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:17:14.144563 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:17:14.144582 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:17:14.144598 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:17:14.144613 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:17:14.144650 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:17:14.144665 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:17:14.144681 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:17:14.144702 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:17:14.146821 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:17:14.146838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 2 00:17:14.146855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:17:14.146867 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:17:14.146878 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:17:14.146889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:17:14.146899 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:17:14.146919 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:17:14.146930 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:17:14.146940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:17:14.146950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:14.146961 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:17:14.146972 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:17:14.147029 systemd-journald[184]: Collecting audit messages is disabled.
Jul 2 00:17:14.147070 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:17:14.147088 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:17:14.147108 systemd-journald[184]: Journal started
Jul 2 00:17:14.147139 systemd-journald[184]: Runtime Journal (/run/log/journal/2f7317247188444a9db4c0b393b38626) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:17:14.132231 systemd-modules-load[185]: Inserted module 'overlay'
Jul 2 00:17:14.155716 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:17:14.195167 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:17:14.199928 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:17:14.199979 kernel: Bridge firewalling registered
Jul 2 00:17:14.200773 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jul 2 00:17:14.202216 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:17:14.203275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:14.225238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:17:14.227965 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:17:14.239305 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:17:14.250129 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:17:14.269571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:17:14.272158 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:17:14.273212 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:17:14.284031 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:17:14.288957 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:17:14.299401 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:17:14.329212 dracut-cmdline[216]: dracut-dracut-053
Jul 2 00:17:14.339818 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:17:14.383762 systemd-resolved[219]: Positive Trust Anchors:
Jul 2 00:17:14.383799 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:17:14.383862 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:17:14.390462 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jul 2 00:17:14.393281 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:17:14.394627 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:17:14.560101 kernel: SCSI subsystem initialized
Jul 2 00:17:14.565957 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:17:14.584377 kernel: iscsi: registered transport (tcp)
Jul 2 00:17:14.620898 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:17:14.621031 kernel: QLogic iSCSI HBA Driver
Jul 2 00:17:14.712521 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:17:14.721055 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:17:14.781828 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:17:14.781943 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:17:14.783753 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:17:14.848776 kernel: raid6: avx2x4 gen() 14283 MB/s
Jul 2 00:17:14.865780 kernel: raid6: avx2x2 gen() 14677 MB/s
Jul 2 00:17:14.882952 kernel: raid6: avx2x1 gen() 12654 MB/s
Jul 2 00:17:14.883067 kernel: raid6: using algorithm avx2x2 gen() 14677 MB/s
Jul 2 00:17:14.900964 kernel: raid6: .... xor() 10892 MB/s, rmw enabled
Jul 2 00:17:14.901091 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:17:14.967762 kernel: xor: automatically using best checksumming function avx
Jul 2 00:17:15.234789 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:17:15.257784 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:17:15.266075 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:17:15.304047 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jul 2 00:17:15.313327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:17:15.324259 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:17:15.357128 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jul 2 00:17:15.416748 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:17:15.425674 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:17:15.536761 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:17:15.544340 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:17:15.598806 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:17:15.601072 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:17:15.602575 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:17:15.604555 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:17:15.615207 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:17:15.658808 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:17:15.664745 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 2 00:17:15.813359 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:17:15.813427 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 2 00:17:15.813751 kernel: scsi host0: Virtio SCSI HBA
Jul 2 00:17:15.813958 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:17:15.813981 kernel: GPT:9289727 != 125829119
Jul 2 00:17:15.814002 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:17:15.814023 kernel: GPT:9289727 != 125829119
Jul 2 00:17:15.814044 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:17:15.814075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:17:15.814096 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:17:15.814117 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:17:15.814137 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 2 00:17:15.832870 kernel: ACPI: bus type USB registered
Jul 2 00:17:15.832913 kernel: usbcore: registered new interface driver usbfs
Jul 2 00:17:15.832935 kernel: usbcore: registered new interface driver hub
Jul 2 00:17:15.832955 kernel: usbcore: registered new device driver usb
Jul 2 00:17:15.832976 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Jul 2 00:17:15.804402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:17:15.804584 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:17:15.806804 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:17:15.807470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:17:15.807765 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:15.808399 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:15.821177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:15.881784 kernel: libata version 3.00 loaded.
Jul 2 00:17:15.928006 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (461)
Jul 2 00:17:15.937751 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Jul 2 00:17:15.946209 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:17:15.978162 kernel: scsi host1: ata_piix
Jul 2 00:17:15.978435 kernel: scsi host2: ata_piix
Jul 2 00:17:15.978630 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jul 2 00:17:15.978650 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jul 2 00:17:15.957188 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:17:15.989945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:16.011123 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 2 00:17:16.011447 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 2 00:17:16.011648 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 2 00:17:16.011834 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 2 00:17:16.012004 kernel: hub 1-0:1.0: USB hub found
Jul 2 00:17:16.012214 kernel: hub 1-0:1.0: 2 ports detected
Jul 2 00:17:16.022676 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:17:16.049228 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:17:16.081629 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:17:16.091368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:17:16.102040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:17:16.104983 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:17:16.119813 disk-uuid[540]: Primary Header is updated.
Jul 2 00:17:16.119813 disk-uuid[540]: Secondary Entries is updated.
Jul 2 00:17:16.119813 disk-uuid[540]: Secondary Header is updated.
Jul 2 00:17:16.129763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:17:16.153516 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:17:17.146773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:17:17.148905 disk-uuid[541]: The operation has completed successfully.
Jul 2 00:17:17.226306 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:17:17.226516 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:17:17.259163 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:17:17.277924 sh[563]: Success
Jul 2 00:17:17.304849 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 00:17:17.411778 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:17:17.418353 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:17:17.431408 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:17:17.456935 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:17:17.457076 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:17:17.457112 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:17:17.458479 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:17:17.459980 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:17:17.473251 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:17:17.474989 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:17:17.486135 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:17:17.491058 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:17:17.504308 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:17.504439 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:17:17.504464 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:17:17.516808 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:17:17.533988 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:17:17.535719 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:17.549954 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:17:17.559330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:17:17.720383 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:17:17.736370 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:17:17.786642 systemd-networkd[749]: lo: Link UP
Jul 2 00:17:17.787570 systemd-networkd[749]: lo: Gained carrier
Jul 2 00:17:17.792629 systemd-networkd[749]: Enumeration completed
Jul 2 00:17:17.793867 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:17:17.794409 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 2 00:17:17.794416 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 2 00:17:17.796003 systemd[1]: Reached target network.target - Network.
Jul 2 00:17:17.796081 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:17:17.796087 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:17:17.798157 systemd-networkd[749]: eth0: Link UP
Jul 2 00:17:17.798164 systemd-networkd[749]: eth0: Gained carrier
Jul 2 00:17:17.798183 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 2 00:17:17.806365 ignition[655]: Ignition 2.18.0
Jul 2 00:17:17.806379 ignition[655]: Stage: fetch-offline
Jul 2 00:17:17.807216 systemd-networkd[749]: eth1: Link UP
Jul 2 00:17:17.806457 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:17.807223 systemd-networkd[749]: eth1: Gained carrier
Jul 2 00:17:17.806472 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:17.807245 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:17:17.807918 ignition[655]: parsed url from cmdline: ""
Jul 2 00:17:17.810996 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:17:17.807927 ignition[655]: no config URL provided
Jul 2 00:17:17.807946 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:17:17.807972 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:17:17.807983 ignition[655]: failed to fetch config: resource requires networking
Jul 2 00:17:17.808302 ignition[655]: Ignition finished successfully
Jul 2 00:17:17.830903 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.12/20 acquired from 169.254.169.253
Jul 2 00:17:17.832991 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:17:17.835944 systemd-networkd[749]: eth0: DHCPv4 address 64.227.97.255/20, gateway 64.227.96.1 acquired from 169.254.169.253
Jul 2 00:17:17.868404 ignition[758]: Ignition 2.18.0
Jul 2 00:17:17.868434 ignition[758]: Stage: fetch
Jul 2 00:17:17.868911 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:17.868942 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:17.869182 ignition[758]: parsed url from cmdline: ""
Jul 2 00:17:17.869190 ignition[758]: no config URL provided
Jul 2 00:17:17.869202 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:17:17.869243 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:17:17.869296 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 2 00:17:17.901158 ignition[758]: GET result: OK
Jul 2 00:17:17.902505 ignition[758]: parsing config with SHA512: ec4dddce75ade43d8e6126aca60c91443681330a1bdce5e589e9c91a84e3b272b38c63a32b7f291a146062b0a4521109b45e2fc4924a8288e86195b436f4fa06
Jul 2 00:17:17.911261 unknown[758]: fetched base config from "system"
Jul 2 00:17:17.911282 unknown[758]: fetched base config from "system"
Jul 2 00:17:17.911294 unknown[758]: fetched user config from "digitalocean"
Jul 2 00:17:17.913096 ignition[758]: fetch: fetch complete
Jul 2 00:17:17.913107 ignition[758]: fetch: fetch passed
Jul 2 00:17:17.913213 ignition[758]: Ignition finished successfully
Jul 2 00:17:17.915819 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:17:17.924144 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:17:17.963976 ignition[766]: Ignition 2.18.0
Jul 2 00:17:17.963995 ignition[766]: Stage: kargs
Jul 2 00:17:17.964278 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:17.964292 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:17.966082 ignition[766]: kargs: kargs passed
Jul 2 00:17:17.966183 ignition[766]: Ignition finished successfully
Jul 2 00:17:17.967879 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:17:17.982523 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:17:18.011819 ignition[773]: Ignition 2.18.0
Jul 2 00:17:18.011837 ignition[773]: Stage: disks
Jul 2 00:17:18.012168 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:18.012184 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:18.013488 ignition[773]: disks: disks passed
Jul 2 00:17:18.015324 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:17:18.013580 ignition[773]: Ignition finished successfully
Jul 2 00:17:18.018133 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:17:18.018973 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:17:18.023787 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:17:18.025959 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:17:18.026794 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:17:18.039278 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:17:18.075482 systemd-fsck[782]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:17:18.084803 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:17:18.095872 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:17:18.258745 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:17:18.260337 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:17:18.262544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:17:18.282414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:17:18.287021 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:17:18.295043 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jul 2 00:17:18.306204 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (790)
Jul 2 00:17:18.306273 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:18.306297 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:17:18.306318 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:17:18.312050 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:17:18.315319 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 2 00:17:18.316938 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:17:18.317003 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:17:18.328584 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:17:18.330631 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:17:18.341066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:17:18.439829 coreos-metadata[808]: Jul 02 00:17:18.438 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:17:18.450727 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:17:18.456352 coreos-metadata[808]: Jul 02 00:17:18.453 INFO Fetch successful
Jul 2 00:17:18.457289 coreos-metadata[792]: Jul 02 00:17:18.456 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:17:18.465392 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:17:18.468316 coreos-metadata[792]: Jul 02 00:17:18.468 INFO Fetch successful
Jul 2 00:17:18.470789 coreos-metadata[808]: Jul 02 00:17:18.470 INFO wrote hostname ci-3975.1.1-8-31c642c6eb to /sysroot/etc/hostname
Jul 2 00:17:18.472786 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:17:18.480420 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jul 2 00:17:18.481396 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jul 2 00:17:18.484064 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:17:18.490322 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:17:18.671623 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:17:18.686421 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:17:18.693970 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:17:18.715160 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:18.707871 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:17:18.760111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:17:18.764303 ignition[911]: INFO : Ignition 2.18.0
Jul 2 00:17:18.764303 ignition[911]: INFO : Stage: mount
Jul 2 00:17:18.766087 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:18.766087 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:18.766087 ignition[911]: INFO : mount: mount passed
Jul 2 00:17:18.766087 ignition[911]: INFO : Ignition finished successfully
Jul 2 00:17:18.767497 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:17:18.778074 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:17:18.800116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:17:18.814930 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Jul 2 00:17:18.818107 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:17:18.818227 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:17:18.819800 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:17:18.827260 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:17:18.833318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:17:18.871620 ignition[940]: INFO : Ignition 2.18.0
Jul 2 00:17:18.872761 ignition[940]: INFO : Stage: files
Jul 2 00:17:18.872761 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:18.872761 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:18.875139 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:17:18.876130 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:17:18.876130 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:17:18.881884 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:17:18.882876 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:17:18.882876 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:17:18.882773 unknown[940]: wrote ssh authorized keys file for user: core
Jul 2 00:17:18.887451 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:17:18.888584 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:17:18.903130 systemd-networkd[749]: eth1: Gained IPv6LL
Jul 2 00:17:18.924254 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:17:18.984245 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:17:18.984245 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:17:18.987598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jul 2 00:17:19.342324 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 00:17:19.672025 systemd-networkd[749]: eth0: Gained IPv6LL
Jul 2 00:17:19.699239 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:17:19.699239 ignition[940]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 00:17:19.702174 ignition[940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:17:19.702174 ignition[940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:17:19.702174 ignition[940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 00:17:19.702174 ignition[940]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:17:19.702174 ignition[940]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:17:19.702174 ignition[940]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:17:19.702174 ignition[940]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:17:19.702174 ignition[940]: INFO : files: files passed
Jul 2 00:17:19.702174 ignition[940]: INFO : Ignition finished successfully
Jul 2 00:17:19.703707 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:17:19.716071 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:17:19.719390 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:17:19.724299 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:17:19.725314 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:17:19.750605 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:17:19.750605 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:17:19.753823 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:17:19.756925 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:17:19.759169 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:17:19.778053 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:17:19.816149 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:17:19.816306 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:17:19.818144 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:17:19.818848 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:17:19.820003 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:17:19.831913 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:17:19.851326 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:17:19.858189 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:17:19.883904 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:17:19.885759 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:17:19.887427 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:17:19.888309 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:17:19.888524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:17:19.889873 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:17:19.890664 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:17:19.891988 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:17:19.892942 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:17:19.894195 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:17:19.895211 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:17:19.896238 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:17:19.897592 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:17:19.898910 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:17:19.900235 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:17:19.901282 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:17:19.901824 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:17:19.903021 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:17:19.904186 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:17:19.905202 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:17:19.905399 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:17:19.907144 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:17:19.907470 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:17:19.908835 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:17:19.909283 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:17:19.910531 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:17:19.910784 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:17:19.911567 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 00:17:19.911853 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:17:19.927255 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:17:19.929223 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:17:19.929729 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:17:19.934769 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:17:19.935900 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:17:19.936749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:17:19.938808 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:17:19.939502 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:17:19.945773 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:17:19.950002 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:17:19.973727 ignition[994]: INFO : Ignition 2.18.0
Jul 2 00:17:19.973727 ignition[994]: INFO : Stage: umount
Jul 2 00:17:19.973727 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:17:19.973727 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:17:19.999051 ignition[994]: INFO : umount: umount passed
Jul 2 00:17:19.999051 ignition[994]: INFO : Ignition finished successfully
Jul 2 00:17:19.980022 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:17:19.981027 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:17:19.981211 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:17:19.998159 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:17:19.998397 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:17:19.999999 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:17:20.000121 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:17:20.000942 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:17:20.001036 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:17:20.002016 systemd[1]: Stopped target network.target - Network.
Jul 2 00:17:20.002758 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:17:20.002855 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:17:20.008187 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:17:20.008560 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:17:20.013863 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:17:20.015849 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:17:20.041898 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:17:20.045944 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:17:20.046077 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:17:20.048951 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:17:20.049083 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:17:20.050450 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:17:20.050592 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:17:20.051385 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:17:20.051486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:17:20.053047 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:17:20.054166 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:17:20.055612 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:17:20.055843 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:17:20.058141 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:17:20.058269 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:17:20.059077 systemd-networkd[749]: eth0: DHCPv6 lease lost
Jul 2 00:17:20.061923 systemd-networkd[749]: eth1: DHCPv6 lease lost
Jul 2 00:17:20.066037 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:17:20.066228 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:17:20.070115 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:17:20.070310 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:17:20.072575 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:17:20.072658 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:17:20.079991 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:17:20.080441 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:17:20.080540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:17:20.081137 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:17:20.081209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:17:20.082096 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:17:20.082155 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:17:20.084043 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:17:20.084115 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:17:20.085279 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:17:20.101400 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:17:20.101861 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:17:20.104230 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:17:20.104320 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:17:20.104996 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:17:20.105050 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:17:20.106151 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:17:20.106231 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:17:20.108913 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:17:20.108978 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:17:20.111295 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:17:20.111372 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:17:20.117016 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:17:20.118009 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:17:20.118098 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:17:20.120255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:17:20.120332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:20.121639 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:17:20.122650 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:17:20.140934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:17:20.141138 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:17:20.143667 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:17:20.151060 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:17:20.165233 systemd[1]: Switching root.
Jul 2 00:17:20.194735 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:17:20.194866 systemd-journald[184]: Journal stopped
Jul 2 00:17:21.850215 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:17:21.850346 kernel: SELinux: policy capability open_perms=1
Jul 2 00:17:21.850368 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:17:21.850387 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:17:21.850401 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:17:21.850417 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:17:21.850430 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:17:21.850442 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:17:21.850465 kernel: audit: type=1403 audit(1719879440.409:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:17:21.850492 systemd[1]: Successfully loaded SELinux policy in 52.107ms.
Jul 2 00:17:21.850519 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.229ms.
Jul 2 00:17:21.850537 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:17:21.850553 systemd[1]: Detected virtualization kvm.
Jul 2 00:17:21.850629 systemd[1]: Detected architecture x86-64.
Jul 2 00:17:21.850664 systemd[1]: Detected first boot.
Jul 2 00:17:21.850684 systemd[1]: Hostname set to .
Jul 2 00:17:21.851863 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:17:21.851894 zram_generator::config[1037]: No configuration found.
Jul 2 00:17:21.851918 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:17:21.851932 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:17:21.851945 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:17:21.851958 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:17:21.851972 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:17:21.851996 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:17:21.852008 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:17:21.852022 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:17:21.852038 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:17:21.852055 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:17:21.852073 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:17:21.852091 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:17:21.852375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:17:21.852411 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:17:21.852430 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:17:21.852443 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:17:21.852474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:17:21.852493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:17:21.852511 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:17:21.852529 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:17:21.852546 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:17:21.852564 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:17:21.852581 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:17:21.852598 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:17:21.852611 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:17:21.852624 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:17:21.852639 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:17:21.852652 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:17:21.852665 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:17:21.852678 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:17:21.852703 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:17:21.852717 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:17:21.852734 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:17:21.852747 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:17:21.852759 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:17:21.852816 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:17:21.852829 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:17:21.852842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:21.852854 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:17:21.852867 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:17:21.852879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:17:21.852908 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:17:21.852921 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:17:21.852935 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:17:21.852955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:21.852974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:17:21.852993 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:17:21.853009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:17:21.853021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:17:21.853038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:17:21.853052 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:17:21.853064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:17:21.853078 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:17:21.853090 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:17:21.853103 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:17:21.853116 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:17:21.853129 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:17:21.853150 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:17:21.853168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:17:21.853187 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:17:21.853201 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:17:21.853219 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:17:21.853232 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:17:21.853244 systemd[1]: Stopped verity-setup.service.
Jul 2 00:17:21.853258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:21.853270 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:17:21.853286 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:17:21.853299 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:17:21.853313 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:17:21.853326 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:17:21.853343 kernel: fuse: init (API version 7.39)
Jul 2 00:17:21.853371 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:17:21.853388 kernel: loop: module loaded
Jul 2 00:17:21.853418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:17:21.853435 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:17:21.853452 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:17:21.853469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:17:21.853493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:17:21.853510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:17:21.853526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:17:21.853543 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:17:21.853564 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:17:21.853585 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:17:21.853603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:17:21.853622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:17:21.853640 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:17:21.853664 kernel: ACPI: bus type drm_connector registered
Jul 2 00:17:21.853682 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:17:21.864786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:17:21.864820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:17:21.864835 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:17:21.864849 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:17:21.864863 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:17:21.864876 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:17:21.864899 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:17:21.864912 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:17:21.864962 systemd-journald[1105]: Collecting audit messages is disabled.
Jul 2 00:17:21.864994 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:17:21.865007 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:17:21.865020 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:17:21.865041 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:17:21.865062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:21.865076 systemd-journald[1105]: Journal started
Jul 2 00:17:21.865101 systemd-journald[1105]: Runtime Journal (/run/log/journal/2f7317247188444a9db4c0b393b38626) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:17:21.390218 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:17:21.414527 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:17:21.415196 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:17:21.882909 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:17:21.883013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:17:21.886739 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:17:21.904321 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:17:21.904424 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:17:21.903543 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:17:21.912860 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:17:21.949241 kernel: loop0: detected capacity change from 0 to 211296
Jul 2 00:17:21.949320 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:17:21.948659 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:17:21.973830 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:17:21.987724 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:17:21.993158 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:17:22.001014 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:17:22.002136 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:17:22.015933 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:17:22.020670 kernel: loop1: detected capacity change from 0 to 80568
Jul 2 00:17:22.020219 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:17:22.031987 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:17:22.066345 systemd-journald[1105]: Time spent on flushing to /var/log/journal/2f7317247188444a9db4c0b393b38626 is 87.017ms for 992 entries.
Jul 2 00:17:22.066345 systemd-journald[1105]: System Journal (/var/log/journal/2f7317247188444a9db4c0b393b38626) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:17:22.168081 systemd-journald[1105]: Received client request to flush runtime journal.
Jul 2 00:17:22.172847 kernel: loop2: detected capacity change from 0 to 8
Jul 2 00:17:22.172890 kernel: loop3: detected capacity change from 0 to 139904
Jul 2 00:17:22.130553 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:17:22.132388 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:17:22.144964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:17:22.156254 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:17:22.158106 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:17:22.172908 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:17:22.173875 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:17:22.179715 kernel: loop4: detected capacity change from 0 to 211296
Jul 2 00:17:22.215751 kernel: loop5: detected capacity change from 0 to 80568
Jul 2 00:17:22.216316 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 00:17:22.256662 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jul 2 00:17:22.257673 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jul 2 00:17:22.278578 kernel: loop6: detected capacity change from 0 to 8
Jul 2 00:17:22.288020 kernel: loop7: detected capacity change from 0 to 139904
Jul 2 00:17:22.283810 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:17:22.313670 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jul 2 00:17:22.314583 (sd-merge)[1177]: Merged extensions into '/usr'.
Jul 2 00:17:22.321056 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:17:22.321075 systemd[1]: Reloading...
Jul 2 00:17:22.465721 zram_generator::config[1205]: No configuration found.
Jul 2 00:17:22.736140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:17:22.793774 ldconfig[1132]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:17:22.839811 systemd[1]: Reloading finished in 518 ms.
Jul 2 00:17:22.875998 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:17:22.879547 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:17:22.892991 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:17:22.907986 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:17:22.934860 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:17:22.934895 systemd[1]: Reloading...
Jul 2 00:17:23.000776 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:17:23.001465 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:17:23.004284 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:17:23.008184 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jul 2 00:17:23.008860 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jul 2 00:17:23.022635 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:17:23.023490 systemd-tmpfiles[1249]: Skipping /boot
Jul 2 00:17:23.057942 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:17:23.058096 systemd-tmpfiles[1249]: Skipping /boot
Jul 2 00:17:23.111742 zram_generator::config[1288]: No configuration found.
Jul 2 00:17:23.245677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:17:23.310005 systemd[1]: Reloading finished in 374 ms.
Jul 2 00:17:23.323088 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:17:23.329513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:17:23.343029 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:17:23.347918 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:17:23.359989 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:17:23.362984 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:17:23.368965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:17:23.372008 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:17:23.380358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:23.380556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:23.386029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:17:23.398100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:17:23.402301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:17:23.403564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:23.403725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:23.417369 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:17:23.421142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:23.421351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:23.421579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:23.421683 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:23.425790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:23.426060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:23.432067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:17:23.433299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:23.433531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:23.449077 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:17:23.450255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:17:23.450486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:17:23.466296 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:17:23.473535 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:17:23.486790 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:17:23.497512 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:17:23.499557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:17:23.500191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:17:23.503599 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:17:23.515103 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:17:23.515341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:17:23.517118 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:17:23.517525 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:17:23.521503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:17:23.524723 augenrules[1352]: No rules
Jul 2 00:17:23.529297 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:17:23.533505 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Jul 2 00:17:23.544225 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:17:23.552605 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:17:23.553562 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:17:23.555493 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:17:23.581937 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:17:23.588557 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:17:23.694745 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1381)
Jul 2 00:17:23.740801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1373)
Jul 2 00:17:23.741864 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jul 2 00:17:23.742951 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:23.743929 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:17:23.752999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:17:23.763974 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:17:23.775254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:17:23.775986 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:17:23.776046 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:17:23.776069 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:17:23.776674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:17:23.778002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:17:23.779521 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:17:23.796079 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:17:23.796269 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:17:23.798901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:17:23.800028 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:17:23.802369 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:17:23.803362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:17:23.807642 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:17:23.808607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:17:23.818273 kernel: ISO 9660 Extensions: RRIP_1991A
Jul 2 00:17:23.818479 systemd-resolved[1328]: Positive Trust Anchors:
Jul 2 00:17:23.818494 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:17:23.818529 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:17:23.819530 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jul 2 00:17:23.825519 systemd-resolved[1328]: Using system hostname 'ci-3975.1.1-8-31c642c6eb'.
Jul 2 00:17:23.828914 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:17:23.830036 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:17:23.841843 systemd-networkd[1370]: lo: Link UP
Jul 2 00:17:23.842335 systemd-networkd[1370]: lo: Gained carrier
Jul 2 00:17:23.848484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:17:23.858158 systemd-networkd[1370]: Enumeration completed
Jul 2 00:17:23.861655 systemd-networkd[1370]: eth0: Configuring with /run/systemd/network/10-32:f6:07:b8:c8:34.network.
Jul 2 00:17:23.862625 systemd-networkd[1370]: eth1: Configuring with /run/systemd/network/10-16:48:85:72:9e:e3.network.
Jul 2 00:17:23.864864 systemd-networkd[1370]: eth0: Link UP
Jul 2 00:17:23.864874 systemd-networkd[1370]: eth0: Gained carrier
Jul 2 00:17:23.866038 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:17:23.866570 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:17:23.867267 systemd[1]: Reached target network.target - Network.
Jul 2 00:17:23.869520 systemd-networkd[1370]: eth1: Link UP
Jul 2 00:17:23.871006 systemd-networkd[1370]: eth1: Gained carrier
Jul 2 00:17:23.876938 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:17:23.881637 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Jul 2 00:17:23.902877 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:17:23.955791 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 2 00:17:23.963769 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 2 00:17:23.971728 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:17:24.025741 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 2 00:17:24.044751 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:17:24.051135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:24.085721 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 2 00:17:24.088735 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 2 00:17:24.096820 kernel: Console: switching to colour dummy device 80x25
Jul 2 00:17:24.096945 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 2 00:17:24.096965 kernel: [drm] features: -context_init
Jul 2 00:17:24.100725 kernel: [drm] number of scanouts: 1
Jul 2 00:17:24.102727 kernel: [drm] number of cap sets: 0
Jul 2 00:17:24.106742 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jul 2 00:17:24.120550 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 2 00:17:24.120637 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:17:24.129728 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 2 00:17:24.133200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:17:24.133520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:24.152463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:24.231264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:17:24.231484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:24.234146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:17:24.309378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:17:24.311736 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:17:24.335444 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:17:24.342999 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:17:24.372996 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:17:24.418157 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:17:24.419652 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:17:24.419804 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:17:24.419985 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:17:24.420085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:17:24.420356 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:17:24.420525 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:17:24.420596 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:17:24.420678 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:17:24.421269 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:17:24.423245 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:17:24.425686 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:17:24.427926 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:17:24.435486 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:17:24.451973 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:17:24.453358 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:17:24.456857 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:17:24.459421 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:17:24.460184 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:17:24.460212 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:17:24.461208 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:17:24.468107 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:17:24.473683 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 00:17:24.485129 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:17:24.490346 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:17:24.494625 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:17:24.496595 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:17:24.502036 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:17:24.517536 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:17:24.527921 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:17:24.536089 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:17:24.548884 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:17:24.553968 jq[1437]: false
Jul 2 00:17:24.554336 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:17:24.555231 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:17:24.563187 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:17:24.569094 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:17:24.573137 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:17:24.578326 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:17:24.578630 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:17:24.580406 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:17:24.580629 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:17:24.617335 coreos-metadata[1435]: Jul 02 00:17:24.615 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:17:24.621739 extend-filesystems[1438]: Found loop4
Jul 2 00:17:24.621739 extend-filesystems[1438]: Found loop5
Jul 2 00:17:24.621739 extend-filesystems[1438]: Found loop6
Jul 2 00:17:24.621739 extend-filesystems[1438]: Found loop7
Jul 2 00:17:24.621739 extend-filesystems[1438]: Found vda
Jul 2 00:17:24.628736 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:17:24.668041 coreos-metadata[1435]: Jul 02 00:17:24.644 INFO Fetch successful
Jul 2 00:17:24.627967 dbus-daemon[1436]: [system] SELinux support is enabled
Jul 2 00:17:24.676578 extend-filesystems[1438]: Found vda1
Jul 2 00:17:24.676578 extend-filesystems[1438]: Found vda2
Jul 2 00:17:24.676578 extend-filesystems[1438]: Found vda3
Jul 2 00:17:24.676578 extend-filesystems[1438]: Found usr
Jul 2 00:17:24.676578 extend-filesystems[1438]: Found vda4
Jul 2 00:17:24.676578 extend-filesystems[1438]: Found vda6
Jul 2 00:17:24.676578 extend-filesystems[1438]: Found vda7
Jul 2 00:17:24.676578 extend-filesystems[1438]: Found vda9
Jul 2 00:17:24.676578 extend-filesystems[1438]: Checking size of /dev/vda9
Jul 2 00:17:24.654486 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:17:24.748961 jq[1448]: true
Jul 2 00:17:24.654598 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:17:24.665289 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:17:24.765307 extend-filesystems[1438]: Resized partition /dev/vda9
Jul 2 00:17:24.665429 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jul 2 00:17:24.770359 jq[1462]: true
Jul 2 00:17:24.665467 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:17:24.773780 extend-filesystems[1477]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:17:24.674156 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:17:24.780966 tar[1450]: linux-amd64/helm
Jul 2 00:17:24.760449 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:17:24.760786 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:17:24.797607 update_engine[1447]: I0702 00:17:24.794426 1447 main.cc:92] Flatcar Update Engine starting
Jul 2 00:17:24.798306 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jul 2 00:17:24.810600 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:17:24.816949 update_engine[1447]: I0702 00:17:24.812067 1447 update_check_scheduler.cc:74] Next update check in 9m30s
Jul 2 00:17:24.823020 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:17:24.946909 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1382)
Jul 2 00:17:24.940283 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:17:24.941212 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:17:24.983050 systemd-networkd[1370]: eth0: Gained IPv6LL
Jul 2 00:17:24.983869 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Jul 2 00:17:24.989429 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:17:25.006929 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:17:25.012345 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:17:25.021622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:17:25.034992 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:17:25.037257 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:17:25.053040 systemd-logind[1446]: New seat seat0.
Jul 2 00:17:25.061185 systemd[1]: Starting sshkeys.service...
Jul 2 00:17:25.072849 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:17:25.072874 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:17:25.074987 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:17:25.166391 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 00:17:25.178768 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 2 00:17:25.187507 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 00:17:25.227173 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:17:25.229273 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:17:25.229273 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 2 00:17:25.229273 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 2 00:17:25.248489 extend-filesystems[1438]: Resized filesystem in /dev/vda9
Jul 2 00:17:25.248489 extend-filesystems[1438]: Found vdb
Jul 2 00:17:25.240353 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:17:25.240669 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:17:25.318597 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:17:25.340804 coreos-metadata[1514]: Jul 02 00:17:25.340 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:17:25.356849 coreos-metadata[1514]: Jul 02 00:17:25.354 INFO Fetch successful
Jul 2 00:17:25.378294 unknown[1514]: wrote ssh authorized keys file for user: core
Jul 2 00:17:25.446242 update-ssh-keys[1527]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:17:25.436888 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:17:25.441631 systemd[1]: Finished sshkeys.service.
Jul 2 00:17:25.559840 systemd-networkd[1370]: eth1: Gained IPv6LL
Jul 2 00:17:25.560962 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Jul 2 00:17:25.572762 containerd[1460]: time="2024-07-02T00:17:25.572348460Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:17:25.687778 containerd[1460]: time="2024-07-02T00:17:25.686194114Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:17:25.687778 containerd[1460]: time="2024-07-02T00:17:25.686278013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.693673773Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.693755850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694130109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694160624Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694296256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694380387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694397528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694487036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694739598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694759817Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:17:25.695050 containerd[1460]: time="2024-07-02T00:17:25.694772670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695591 containerd[1460]: time="2024-07-02T00:17:25.694893341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:17:25.695591 containerd[1460]: time="2024-07-02T00:17:25.694908815Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:17:25.695591 containerd[1460]: time="2024-07-02T00:17:25.694987702Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:17:25.695591 containerd[1460]: time="2024-07-02T00:17:25.695004304Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.716875462Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.716952229Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.716976076Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717031075Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717056281Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717075256Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717094427Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717405707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717435632Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717455470Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717475060Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717493813Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717518027Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:17:25.718954 containerd[1460]: time="2024-07-02T00:17:25.717553094Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.717571723Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.717596077Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.717617392Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.717638957Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.717661400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.717877986Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.718244935Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.718290372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.718306242Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.718331312Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.718388778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.718401355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.718412599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719493 containerd[1460]: time="2024-07-02T00:17:25.718424487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719801 containerd[1460]: time="2024-07-02T00:17:25.718440864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719801 containerd[1460]: time="2024-07-02T00:17:25.718454381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719801 containerd[1460]: time="2024-07-02T00:17:25.718475817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719801 containerd[1460]: time="2024-07-02T00:17:25.718487490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719801 containerd[1460]: time="2024-07-02T00:17:25.718501785Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:17:25.719801 containerd[1460]: time="2024-07-02T00:17:25.718647999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719801 containerd[1460]: time="2024-07-02T00:17:25.718665444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.719801 containerd[1460]: time="2024-07-02T00:17:25.718677405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.720312 containerd[1460]: time="2024-07-02T00:17:25.720266125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.721883 containerd[1460]: time="2024-07-02T00:17:25.721816740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.722062 containerd[1460]: time="2024-07-02T00:17:25.722038544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.722147 containerd[1460]: time="2024-07-02T00:17:25.722131095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.722265 containerd[1460]: time="2024-07-02T00:17:25.722246015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:17:25.722839 containerd[1460]: time="2024-07-02T00:17:25.722742063Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:17:25.723324 containerd[1460]: time="2024-07-02T00:17:25.723116384Z" level=info msg="Connect containerd service" Jul 2 00:17:25.723324 containerd[1460]: time="2024-07-02T00:17:25.723186203Z" level=info msg="using legacy CRI server" Jul 2 00:17:25.723324 containerd[1460]: time="2024-07-02T00:17:25.723195285Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:17:25.724877 containerd[1460]: time="2024-07-02T00:17:25.724774501Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:17:25.728885 containerd[1460]: time="2024-07-02T00:17:25.728478986Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:17:25.728885 containerd[1460]: time="2024-07-02T00:17:25.728700788Z" level=info msg="Start subscribing containerd 
event" Jul 2 00:17:25.728885 containerd[1460]: time="2024-07-02T00:17:25.728778885Z" level=info msg="Start recovering state" Jul 2 00:17:25.728885 containerd[1460]: time="2024-07-02T00:17:25.728803245Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:17:25.728885 containerd[1460]: time="2024-07-02T00:17:25.728898361Z" level=info msg="Start event monitor" Jul 2 00:17:25.730683 containerd[1460]: time="2024-07-02T00:17:25.728921107Z" level=info msg="Start snapshots syncer" Jul 2 00:17:25.730683 containerd[1460]: time="2024-07-02T00:17:25.728939361Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:17:25.730683 containerd[1460]: time="2024-07-02T00:17:25.728951192Z" level=info msg="Start streaming server" Jul 2 00:17:25.730683 containerd[1460]: time="2024-07-02T00:17:25.729136795Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:17:25.730683 containerd[1460]: time="2024-07-02T00:17:25.729169352Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:17:25.733589 containerd[1460]: time="2024-07-02T00:17:25.731122821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:17:25.733589 containerd[1460]: time="2024-07-02T00:17:25.731465664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:17:25.733589 containerd[1460]: time="2024-07-02T00:17:25.731523483Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:17:25.743285 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 2 00:17:25.747630 containerd[1460]: time="2024-07-02T00:17:25.747586506Z" level=info msg="containerd successfully booted in 0.182024s" Jul 2 00:17:25.888720 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:17:25.967726 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:17:25.985149 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:17:25.998495 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:17:25.998909 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:17:26.013547 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:17:26.056612 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:17:26.070215 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:17:26.077257 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:17:26.080677 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:17:26.219523 tar[1450]: linux-amd64/LICENSE Jul 2 00:17:26.220147 tar[1450]: linux-amd64/README.md Jul 2 00:17:26.236718 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:17:26.646211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:26.651250 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:17:26.655105 systemd[1]: Startup finished in 1.376s (kernel) + 6.622s (initrd) + 6.296s (userspace) = 14.295s. 
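The containerd entries above are logfmt-style `key=value` records with quoted values. A minimal Python sketch for pulling fields such as `level` and `msg` out of one entry — the sample line is taken from the log itself, and this parser only assumes the formats visible here:

```python
import shlex

def parse_logfmt(line: str) -> dict:
    """Split a logfmt line into a dict, honouring double-quoted values."""
    fields = {}
    for token in shlex.split(line):  # shlex keeps quoted spans as one token
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

entry = 'time="2024-07-02T00:17:25.747586506Z" level=info msg="containerd successfully booted in 0.182024s"'
parsed = parse_logfmt(entry)
print(parsed["level"], "-", parsed["msg"])
```

A full logfmt implementation also handles escaped quotes and bare flags; this sketch is just enough for the entries shown in this log.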
Jul 2 00:17:26.659917 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:17:27.584170 kubelet[1559]: E0702 00:17:27.584011 1559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:17:27.587223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:17:27.587388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:17:27.588024 systemd[1]: kubelet.service: Consumed 1.395s CPU time. Jul 2 00:17:34.023171 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:17:34.030122 systemd[1]: Started sshd@0-64.227.97.255:22-147.75.109.163:54138.service - OpenSSH per-connection server daemon (147.75.109.163:54138). Jul 2 00:17:34.100878 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 54138 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:34.104322 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:34.117818 systemd-logind[1446]: New session 1 of user core. Jul 2 00:17:34.120140 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:17:34.128132 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:17:34.147491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:17:34.152263 systemd[1]: Starting user@500.service - User Manager for UID 500... 
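The kubelet failure above (exit status 1, `open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal state of a node that has not yet been joined to a cluster: on a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`, and systemd keeps restarting the unit until it appears. A small diagnostic sketch, using only the path reported in the log:

```python
from pathlib import Path

# Path taken verbatim from the kubelet error message in the log above.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present(path: Path = KUBELET_CONFIG) -> bool:
    """True once kubeadm init/join has written the kubelet config file."""
    return path.is_file()

if not kubelet_config_present():
    print(f"kubelet config missing: {KUBELET_CONFIG} "
          "(expected until kubeadm init/join runs)")
```

This matches the later entries in the log, where the same error repeats on each scheduled restart until provisioning completes.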
Jul 2 00:17:34.162274 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:34.297636 systemd[1576]: Queued start job for default target default.target. Jul 2 00:17:34.305200 systemd[1576]: Created slice app.slice - User Application Slice. Jul 2 00:17:34.305269 systemd[1576]: Reached target paths.target - Paths. Jul 2 00:17:34.305285 systemd[1576]: Reached target timers.target - Timers. Jul 2 00:17:34.307186 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:17:34.323591 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:17:34.323768 systemd[1576]: Reached target sockets.target - Sockets. Jul 2 00:17:34.323792 systemd[1576]: Reached target basic.target - Basic System. Jul 2 00:17:34.323847 systemd[1576]: Reached target default.target - Main User Target. Jul 2 00:17:34.323881 systemd[1576]: Startup finished in 151ms. Jul 2 00:17:34.324071 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:17:34.328069 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:17:34.404047 systemd[1]: Started sshd@1-64.227.97.255:22-147.75.109.163:54144.service - OpenSSH per-connection server daemon (147.75.109.163:54144). Jul 2 00:17:34.471059 sshd[1587]: Accepted publickey for core from 147.75.109.163 port 54144 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:34.475431 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:34.484178 systemd-logind[1446]: New session 2 of user core. Jul 2 00:17:34.490154 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:17:34.565000 sshd[1587]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:34.583092 systemd[1]: sshd@1-64.227.97.255:22-147.75.109.163:54144.service: Deactivated successfully. Jul 2 00:17:34.585758 systemd[1]: session-2.scope: Deactivated successfully. 
Jul 2 00:17:34.593796 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:17:34.601286 systemd[1]: Started sshd@2-64.227.97.255:22-147.75.109.163:54150.service - OpenSSH per-connection server daemon (147.75.109.163:54150). Jul 2 00:17:34.603679 systemd-logind[1446]: Removed session 2. Jul 2 00:17:34.677629 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 54150 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:34.680862 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:34.690341 systemd-logind[1446]: New session 3 of user core. Jul 2 00:17:34.699093 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:17:34.758165 sshd[1594]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:34.774069 systemd[1]: sshd@2-64.227.97.255:22-147.75.109.163:54150.service: Deactivated successfully. Jul 2 00:17:34.776138 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:17:34.779026 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:17:34.787240 systemd[1]: Started sshd@3-64.227.97.255:22-147.75.109.163:54154.service - OpenSSH per-connection server daemon (147.75.109.163:54154). Jul 2 00:17:34.790044 systemd-logind[1446]: Removed session 3. Jul 2 00:17:34.835218 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 54154 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:34.837937 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:34.844720 systemd-logind[1446]: New session 4 of user core. Jul 2 00:17:34.855063 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:17:34.918989 sshd[1601]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:34.932204 systemd[1]: sshd@3-64.227.97.255:22-147.75.109.163:54154.service: Deactivated successfully. 
Jul 2 00:17:34.934638 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:17:34.936957 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:17:34.941141 systemd[1]: Started sshd@4-64.227.97.255:22-147.75.109.163:54162.service - OpenSSH per-connection server daemon (147.75.109.163:54162). Jul 2 00:17:34.942596 systemd-logind[1446]: Removed session 4. Jul 2 00:17:35.008727 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 54162 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:35.011278 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:35.017836 systemd-logind[1446]: New session 5 of user core. Jul 2 00:17:35.027282 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:17:35.105634 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:17:35.106996 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:17:35.131658 sudo[1611]: pam_unix(sudo:session): session closed for user root Jul 2 00:17:35.136362 sshd[1608]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:35.150221 systemd[1]: sshd@4-64.227.97.255:22-147.75.109.163:54162.service: Deactivated successfully. Jul 2 00:17:35.153033 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:17:35.154146 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:17:35.168408 systemd[1]: Started sshd@5-64.227.97.255:22-147.75.109.163:54172.service - OpenSSH per-connection server daemon (147.75.109.163:54172). Jul 2 00:17:35.171742 systemd-logind[1446]: Removed session 5. 
Jul 2 00:17:35.222248 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 54172 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:35.224447 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:35.233046 systemd-logind[1446]: New session 6 of user core. Jul 2 00:17:35.240045 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:17:35.305583 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:17:35.306099 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:17:35.312581 sudo[1620]: pam_unix(sudo:session): session closed for user root Jul 2 00:17:35.321100 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:17:35.321664 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:17:35.344129 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:17:35.347092 auditctl[1623]: No rules Jul 2 00:17:35.347573 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:17:35.347820 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:17:35.356314 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:17:35.403751 augenrules[1641]: No rules Jul 2 00:17:35.405617 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:17:35.408978 sudo[1619]: pam_unix(sudo:session): session closed for user root Jul 2 00:17:35.413317 sshd[1616]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:35.427622 systemd[1]: sshd@5-64.227.97.255:22-147.75.109.163:54172.service: Deactivated successfully. Jul 2 00:17:35.431472 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:17:35.435095 systemd-logind[1446]: Session 6 logged out. 
Waiting for processes to exit. Jul 2 00:17:35.444215 systemd[1]: Started sshd@6-64.227.97.255:22-147.75.109.163:54180.service - OpenSSH per-connection server daemon (147.75.109.163:54180). Jul 2 00:17:35.447014 systemd-logind[1446]: Removed session 6. Jul 2 00:17:35.494328 sshd[1649]: Accepted publickey for core from 147.75.109.163 port 54180 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:17:35.496556 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:35.503396 systemd-logind[1446]: New session 7 of user core. Jul 2 00:17:35.513055 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:17:35.575544 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:17:35.576714 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:17:35.836309 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:17:35.854656 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:17:36.595030 dockerd[1661]: time="2024-07-02T00:17:36.583846077Z" level=info msg="Starting up" Jul 2 00:17:36.778398 dockerd[1661]: time="2024-07-02T00:17:36.776758390Z" level=info msg="Loading containers: start." Jul 2 00:17:37.034233 kernel: Initializing XFRM netlink socket Jul 2 00:17:37.108505 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Jul 2 00:17:37.738311 systemd-resolved[1328]: Clock change detected. Flushing caches. Jul 2 00:17:37.739750 systemd-timesyncd[1344]: Contacted time server 23.141.40.124:123 (2.flatcar.pool.ntp.org). Jul 2 00:17:37.739866 systemd-timesyncd[1344]: Initial clock synchronization to Tue 2024-07-02 00:17:37.738222 UTC. 
Jul 2 00:17:37.859704 systemd-networkd[1370]: docker0: Link UP Jul 2 00:17:37.902666 dockerd[1661]: time="2024-07-02T00:17:37.902244530Z" level=info msg="Loading containers: done." Jul 2 00:17:38.070487 dockerd[1661]: time="2024-07-02T00:17:38.069810785Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:17:38.070487 dockerd[1661]: time="2024-07-02T00:17:38.070339993Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:17:38.070908 dockerd[1661]: time="2024-07-02T00:17:38.070590937Z" level=info msg="Daemon has completed initialization" Jul 2 00:17:38.156646 dockerd[1661]: time="2024-07-02T00:17:38.154769735Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:17:38.157078 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:17:38.201255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:17:38.209756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:38.468746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:17:38.488627 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:17:38.660960 kubelet[1796]: E0702 00:17:38.660294 1796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:17:38.669968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:17:38.670271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:17:39.788475 containerd[1460]: time="2024-07-02T00:17:39.788028898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 00:17:40.751422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858080070.mount: Deactivated successfully. 
Jul 2 00:17:43.366028 containerd[1460]: time="2024-07-02T00:17:43.365921412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:43.367672 containerd[1460]: time="2024-07-02T00:17:43.367597830Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jul 2 00:17:43.368892 containerd[1460]: time="2024-07-02T00:17:43.368819399Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:43.374837 containerd[1460]: time="2024-07-02T00:17:43.374732522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:43.376806 containerd[1460]: time="2024-07-02T00:17:43.376566689Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 3.588449736s" Jul 2 00:17:43.376806 containerd[1460]: time="2024-07-02T00:17:43.376633845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 00:17:43.415010 containerd[1460]: time="2024-07-02T00:17:43.414903156Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 00:17:45.677412 containerd[1460]: time="2024-07-02T00:17:45.677333542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:45.680454 containerd[1460]: time="2024-07-02T00:17:45.680385646Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747" Jul 2 00:17:45.683588 containerd[1460]: time="2024-07-02T00:17:45.683510305Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:45.688807 containerd[1460]: time="2024-07-02T00:17:45.687238091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:45.688807 containerd[1460]: time="2024-07-02T00:17:45.688629347Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.273256009s" Jul 2 00:17:45.688807 containerd[1460]: time="2024-07-02T00:17:45.688674245Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 00:17:45.731551 containerd[1460]: time="2024-07-02T00:17:45.731469147Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 00:17:46.992102 containerd[1460]: time="2024-07-02T00:17:46.990709862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:47.000192 containerd[1460]: time="2024-07-02T00:17:46.999844253Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803" Jul 2 00:17:47.002895 containerd[1460]: time="2024-07-02T00:17:47.001863056Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:47.007194 containerd[1460]: time="2024-07-02T00:17:47.007120412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:47.008602 containerd[1460]: time="2024-07-02T00:17:47.008542590Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.276541942s" Jul 2 00:17:47.008808 containerd[1460]: time="2024-07-02T00:17:47.008787356Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 00:17:47.040295 containerd[1460]: time="2024-07-02T00:17:47.040240781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 00:17:48.362267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409080098.mount: Deactivated successfully. 
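The pull messages in this log report Go-style duration strings such as `"3.588449736s"` and `"607.035614ms"`. A small sketch, assuming only the unit suffixes that actually appear here, for converting those strings to seconds when post-processing pull timings:

```python
import re

# Unit suffixes as used in the containerd pull messages above.
_UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text: str) -> float:
    """Convert a Go-style duration like '3.588449736s' or '1m30s' to seconds."""
    total = 0.0
    # 'ms' must be tried before 's' so that milliseconds match correctly.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|s|m|h)", text):
        total += float(value) * _UNITS[unit]
    return total

print(parse_go_duration("3.588449736s"))  # kube-apiserver pull from the log
```

Go durations can also carry `us`/`ns` components; this sketch deliberately covers only the forms visible in this boot log.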
Jul 2 00:17:48.825031 containerd[1460]: time="2024-07-02T00:17:48.824973381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:48.827149 containerd[1460]: time="2024-07-02T00:17:48.827097004Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334" Jul 2 00:17:48.829101 containerd[1460]: time="2024-07-02T00:17:48.828996072Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:48.833009 containerd[1460]: time="2024-07-02T00:17:48.832914819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:48.834267 containerd[1460]: time="2024-07-02T00:17:48.834099812Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 1.793808604s" Jul 2 00:17:48.834267 containerd[1460]: time="2024-07-02T00:17:48.834147947Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 00:17:48.864956 containerd[1460]: time="2024-07-02T00:17:48.864740215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:17:48.915637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:17:48.921202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 00:17:49.094170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:49.103444 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:17:49.187740 kubelet[1911]: E0702 00:17:49.187647 1911 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:17:49.190565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:17:49.190750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:17:49.600621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436719574.mount: Deactivated successfully. Jul 2 00:17:50.740763 containerd[1460]: time="2024-07-02T00:17:50.739464101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:50.742194 containerd[1460]: time="2024-07-02T00:17:50.742108210Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 00:17:50.744201 containerd[1460]: time="2024-07-02T00:17:50.744088858Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:50.748927 containerd[1460]: time="2024-07-02T00:17:50.748837838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:50.751037 containerd[1460]: time="2024-07-02T00:17:50.750524594Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.88568842s" Jul 2 00:17:50.751037 containerd[1460]: time="2024-07-02T00:17:50.750587345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 00:17:50.781242 containerd[1460]: time="2024-07-02T00:17:50.781189135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:17:51.371188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993897777.mount: Deactivated successfully. Jul 2 00:17:51.381870 containerd[1460]: time="2024-07-02T00:17:51.380296012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:51.381870 containerd[1460]: time="2024-07-02T00:17:51.381642066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 00:17:51.382679 containerd[1460]: time="2024-07-02T00:17:51.382603014Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:51.387904 containerd[1460]: time="2024-07-02T00:17:51.387291434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:51.388303 containerd[1460]: time="2024-07-02T00:17:51.388270094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id 
\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 607.035614ms" Jul 2 00:17:51.388386 containerd[1460]: time="2024-07-02T00:17:51.388372164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 00:17:51.424494 containerd[1460]: time="2024-07-02T00:17:51.424324543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:17:52.106658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285994999.mount: Deactivated successfully. Jul 2 00:17:54.170472 containerd[1460]: time="2024-07-02T00:17:54.170394244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:54.171974 containerd[1460]: time="2024-07-02T00:17:54.171896892Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jul 2 00:17:54.173968 containerd[1460]: time="2024-07-02T00:17:54.173895941Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:54.179159 containerd[1460]: time="2024-07-02T00:17:54.179078077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:54.181034 containerd[1460]: time="2024-07-02T00:17:54.180966024Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest 
\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.756205766s" Jul 2 00:17:54.181034 containerd[1460]: time="2024-07-02T00:17:54.181034273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 00:17:58.457393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:58.465249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:58.507075 systemd[1]: Reloading requested from client PID 2086 ('systemctl') (unit session-7.scope)... Jul 2 00:17:58.507298 systemd[1]: Reloading... Jul 2 00:17:58.644938 zram_generator::config[2121]: No configuration found. Jul 2 00:17:58.818552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:17:58.908005 systemd[1]: Reloading finished in 400 ms. Jul 2 00:17:58.964772 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:17:58.965349 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:17:58.965943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:58.974745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:59.118357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:59.131496 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:17:59.222191 kubelet[2176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:17:59.222716 kubelet[2176]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:17:59.222788 kubelet[2176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:17:59.223019 kubelet[2176]: I0702 00:17:59.222953 2176 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:18:00.031154 kubelet[2176]: I0702 00:18:00.028334 2176 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:18:00.031154 kubelet[2176]: I0702 00:18:00.028392 2176 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:18:00.031154 kubelet[2176]: I0702 00:18:00.028800 2176 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:18:00.161337 kubelet[2176]: E0702 00:18:00.161239 2176 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.227.97.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.162485 kubelet[2176]: I0702 00:18:00.161838 2176 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:18:00.183230 kubelet[2176]: I0702 00:18:00.183174 2176 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:18:00.183696 kubelet[2176]: I0702 00:18:00.183640 2176 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:18:00.186624 kubelet[2176]: I0702 00:18:00.185659 2176 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:18:00.186624 kubelet[2176]: I0702 00:18:00.185752 2176 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:18:00.186624 kubelet[2176]: I0702 00:18:00.185778 2176 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:18:00.186624 kubelet[2176]: I0702 
00:18:00.186194 2176 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:18:00.190913 kubelet[2176]: I0702 00:18:00.189001 2176 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:18:00.190913 kubelet[2176]: I0702 00:18:00.189099 2176 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:18:00.190913 kubelet[2176]: I0702 00:18:00.189160 2176 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:18:00.190913 kubelet[2176]: I0702 00:18:00.189218 2176 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:18:00.193072 kubelet[2176]: W0702 00:18:00.192980 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://64.227.97.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-8-31c642c6eb&limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.193937 kubelet[2176]: E0702 00:18:00.193882 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.227.97.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-8-31c642c6eb&limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.194733 kubelet[2176]: I0702 00:18:00.194703 2176 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:18:00.208515 kubelet[2176]: W0702 00:18:00.208076 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://64.227.97.255:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.208515 kubelet[2176]: E0702 00:18:00.208150 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://64.227.97.255:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.216252 kubelet[2176]: I0702 00:18:00.216159 2176 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:18:00.219729 kubelet[2176]: W0702 00:18:00.218924 2176 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:18:00.220772 kubelet[2176]: I0702 00:18:00.220716 2176 server.go:1256] "Started kubelet" Jul 2 00:18:00.224381 kubelet[2176]: I0702 00:18:00.224134 2176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:18:00.239999 kubelet[2176]: I0702 00:18:00.239938 2176 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:18:00.243310 kubelet[2176]: I0702 00:18:00.243261 2176 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:18:00.244903 kubelet[2176]: E0702 00:18:00.244811 2176 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.97.255:6443/api/v1/namespaces/default/events\": dial tcp 64.227.97.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-8-31c642c6eb.17de3d466308c2a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-8-31c642c6eb,UID:ci-3975.1.1-8-31c642c6eb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-8-31c642c6eb,},FirstTimestamp:2024-07-02 00:18:00.220631719 +0000 UTC m=+1.082486642,LastTimestamp:2024-07-02 00:18:00.220631719 +0000 UTC m=+1.082486642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-8-31c642c6eb,}" Jul 2 00:18:00.249632 kubelet[2176]: I0702 00:18:00.249282 2176 ratelimit.go:55] 
"Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:18:00.250779 kubelet[2176]: I0702 00:18:00.249831 2176 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:18:00.257008 kubelet[2176]: I0702 00:18:00.254351 2176 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:18:00.258626 kubelet[2176]: I0702 00:18:00.258345 2176 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:18:00.258626 kubelet[2176]: I0702 00:18:00.258497 2176 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:18:00.266638 kubelet[2176]: E0702 00:18:00.266585 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.97.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-8-31c642c6eb?timeout=10s\": dial tcp 64.227.97.255:6443: connect: connection refused" interval="200ms" Jul 2 00:18:00.269982 kubelet[2176]: W0702 00:18:00.264824 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://64.227.97.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.269982 kubelet[2176]: E0702 00:18:00.269594 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.227.97.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.276605 kubelet[2176]: I0702 00:18:00.276563 2176 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:18:00.277306 kubelet[2176]: I0702 00:18:00.277250 2176 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:18:00.281757 kubelet[2176]: E0702 00:18:00.281457 2176 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:18:00.288285 kubelet[2176]: I0702 00:18:00.286198 2176 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:18:00.308920 kubelet[2176]: I0702 00:18:00.308842 2176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:18:00.315889 kubelet[2176]: I0702 00:18:00.315210 2176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:18:00.315889 kubelet[2176]: I0702 00:18:00.315278 2176 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:18:00.315889 kubelet[2176]: I0702 00:18:00.315306 2176 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:18:00.315889 kubelet[2176]: E0702 00:18:00.315419 2176 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:18:00.318594 kubelet[2176]: W0702 00:18:00.318443 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://64.227.97.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.318594 kubelet[2176]: E0702 00:18:00.318547 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.227.97.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:00.335663 kubelet[2176]: I0702 00:18:00.335559 2176 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:18:00.335663 
kubelet[2176]: I0702 00:18:00.335600 2176 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:18:00.335663 kubelet[2176]: I0702 00:18:00.335645 2176 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:18:00.352228 kubelet[2176]: I0702 00:18:00.352163 2176 policy_none.go:49] "None policy: Start" Jul 2 00:18:00.354705 kubelet[2176]: I0702 00:18:00.354660 2176 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:18:00.355462 kubelet[2176]: I0702 00:18:00.355050 2176 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:18:00.374959 kubelet[2176]: I0702 00:18:00.374896 2176 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.376170 kubelet[2176]: E0702 00:18:00.375769 2176 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.97.255:6443/api/v1/nodes\": dial tcp 64.227.97.255:6443: connect: connection refused" node="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.421287 kubelet[2176]: E0702 00:18:00.415566 2176 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:18:00.417815 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:18:00.437975 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:18:00.445126 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 00:18:00.463418 kubelet[2176]: I0702 00:18:00.463339 2176 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:18:00.464168 kubelet[2176]: I0702 00:18:00.463945 2176 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:18:00.467511 kubelet[2176]: E0702 00:18:00.467362 2176 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-8-31c642c6eb\" not found" Jul 2 00:18:00.485672 kubelet[2176]: E0702 00:18:00.485557 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.97.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-8-31c642c6eb?timeout=10s\": dial tcp 64.227.97.255:6443: connect: connection refused" interval="400ms" Jul 2 00:18:00.578878 kubelet[2176]: I0702 00:18:00.578464 2176 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.579946 kubelet[2176]: E0702 00:18:00.579903 2176 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.97.255:6443/api/v1/nodes\": dial tcp 64.227.97.255:6443: connect: connection refused" node="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.616452 kubelet[2176]: I0702 00:18:00.616347 2176 topology_manager.go:215] "Topology Admit Handler" podUID="71ca84309fb2b05ed2fd4d7e960911d8" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.618732 kubelet[2176]: I0702 00:18:00.618185 2176 topology_manager.go:215] "Topology Admit Handler" podUID="14e054431a2b71b9a87d2a3dd990cae8" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.619939 kubelet[2176]: I0702 00:18:00.619355 2176 topology_manager.go:215] "Topology Admit Handler" podUID="017ac7f2e737e1e16e89f8172459aad9" podNamespace="kube-system" 
podName="kube-apiserver-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.640734 systemd[1]: Created slice kubepods-burstable-pod71ca84309fb2b05ed2fd4d7e960911d8.slice - libcontainer container kubepods-burstable-pod71ca84309fb2b05ed2fd4d7e960911d8.slice. Jul 2 00:18:00.660800 kubelet[2176]: I0702 00:18:00.660749 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.660998 kubelet[2176]: I0702 00:18:00.660823 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.663937 systemd[1]: Created slice kubepods-burstable-pod14e054431a2b71b9a87d2a3dd990cae8.slice - libcontainer container kubepods-burstable-pod14e054431a2b71b9a87d2a3dd990cae8.slice. 
Jul 2 00:18:00.665171 kubelet[2176]: I0702 00:18:00.664316 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/017ac7f2e737e1e16e89f8172459aad9-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-8-31c642c6eb\" (UID: \"017ac7f2e737e1e16e89f8172459aad9\") " pod="kube-system/kube-apiserver-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.665171 kubelet[2176]: I0702 00:18:00.664409 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/017ac7f2e737e1e16e89f8172459aad9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-8-31c642c6eb\" (UID: \"017ac7f2e737e1e16e89f8172459aad9\") " pod="kube-system/kube-apiserver-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.665171 kubelet[2176]: I0702 00:18:00.664445 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.665171 kubelet[2176]: I0702 00:18:00.664484 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.665171 kubelet[2176]: I0702 00:18:00.664519 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.667159 kubelet[2176]: I0702 00:18:00.664552 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14e054431a2b71b9a87d2a3dd990cae8-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-8-31c642c6eb\" (UID: \"14e054431a2b71b9a87d2a3dd990cae8\") " pod="kube-system/kube-scheduler-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.667159 kubelet[2176]: I0702 00:18:00.664617 2176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/017ac7f2e737e1e16e89f8172459aad9-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-8-31c642c6eb\" (UID: \"017ac7f2e737e1e16e89f8172459aad9\") " pod="kube-system/kube-apiserver-ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.705951 systemd[1]: Created slice kubepods-burstable-pod017ac7f2e737e1e16e89f8172459aad9.slice - libcontainer container kubepods-burstable-pod017ac7f2e737e1e16e89f8172459aad9.slice. 
Jul 2 00:18:00.886798 kubelet[2176]: E0702 00:18:00.886611 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.97.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-8-31c642c6eb?timeout=10s\": dial tcp 64.227.97.255:6443: connect: connection refused" interval="800ms" Jul 2 00:18:00.960093 kubelet[2176]: E0702 00:18:00.959390 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:00.961432 containerd[1460]: time="2024-07-02T00:18:00.960554758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-8-31c642c6eb,Uid:71ca84309fb2b05ed2fd4d7e960911d8,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:00.982063 kubelet[2176]: I0702 00:18:00.981712 2176 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.982536 kubelet[2176]: E0702 00:18:00.982375 2176 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.97.255:6443/api/v1/nodes\": dial tcp 64.227.97.255:6443: connect: connection refused" node="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:00.999895 kubelet[2176]: E0702 00:18:00.998772 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:01.006171 containerd[1460]: time="2024-07-02T00:18:01.003239948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-8-31c642c6eb,Uid:14e054431a2b71b9a87d2a3dd990cae8,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:01.011306 kubelet[2176]: E0702 00:18:01.011265 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Jul 2 00:18:01.012619 containerd[1460]: time="2024-07-02T00:18:01.012499707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-8-31c642c6eb,Uid:017ac7f2e737e1e16e89f8172459aad9,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:01.130795 kubelet[2176]: W0702 00:18:01.130613 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://64.227.97.255:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:01.130795 kubelet[2176]: E0702 00:18:01.130742 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.227.97.255:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:01.372752 kubelet[2176]: W0702 00:18:01.372631 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://64.227.97.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:01.374312 kubelet[2176]: E0702 00:18:01.373937 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.227.97.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:01.577272 kubelet[2176]: W0702 00:18:01.577052 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://64.227.97.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:01.577272 kubelet[2176]: E0702 00:18:01.577116 2176 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.227.97.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:01.709838 kubelet[2176]: E0702 00:18:01.709758 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.97.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-8-31c642c6eb?timeout=10s\": dial tcp 64.227.97.255:6443: connect: connection refused" interval="1.6s" Jul 2 00:18:01.713641 kubelet[2176]: W0702 00:18:01.713341 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://64.227.97.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-8-31c642c6eb&limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:01.714043 kubelet[2176]: E0702 00:18:01.714014 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.227.97.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-8-31c642c6eb&limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused Jul 2 00:18:01.786924 kubelet[2176]: I0702 00:18:01.786523 2176 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:01.787288 kubelet[2176]: E0702 00:18:01.787261 2176 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.97.255:6443/api/v1/nodes\": dial tcp 64.227.97.255:6443: connect: connection refused" node="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:01.947672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483142094.mount: Deactivated successfully. 
Jul 2 00:18:01.976264 containerd[1460]: time="2024-07-02T00:18:01.975115540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:18:01.984844 containerd[1460]: time="2024-07-02T00:18:01.984716454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:18:01.993934 containerd[1460]: time="2024-07-02T00:18:01.992472218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:18:02.001054 containerd[1460]: time="2024-07-02T00:18:02.000534851Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:18:02.012893 containerd[1460]: time="2024-07-02T00:18:02.011747824Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:18:02.019753 containerd[1460]: time="2024-07-02T00:18:02.019531549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:18:02.021965 containerd[1460]: time="2024-07-02T00:18:02.021872563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 2 00:18:02.030125 containerd[1460]: time="2024-07-02T00:18:02.030031086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:18:02.036721 containerd[1460]: time="2024-07-02T00:18:02.035962804Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.023194172s"
Jul 2 00:18:02.038048 containerd[1460]: time="2024-07-02T00:18:02.037428308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.076713939s"
Jul 2 00:18:02.047800 containerd[1460]: time="2024-07-02T00:18:02.047510703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.04412559s"
Jul 2 00:18:02.285458 kubelet[2176]: E0702 00:18:02.285254 2176 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.227.97.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.227.97.255:6443: connect: connection refused
Jul 2 00:18:02.584041 containerd[1460]: time="2024-07-02T00:18:02.583397862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:02.584041 containerd[1460]: time="2024-07-02T00:18:02.583581765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:02.585225 containerd[1460]: time="2024-07-02T00:18:02.583620522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:02.585225 containerd[1460]: time="2024-07-02T00:18:02.583649851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:02.613055 containerd[1460]: time="2024-07-02T00:18:02.611367769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:02.613055 containerd[1460]: time="2024-07-02T00:18:02.611476159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:02.613055 containerd[1460]: time="2024-07-02T00:18:02.611538307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:02.613055 containerd[1460]: time="2024-07-02T00:18:02.611564291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:02.614174 containerd[1460]: time="2024-07-02T00:18:02.613629386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:02.614174 containerd[1460]: time="2024-07-02T00:18:02.613747279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:02.614174 containerd[1460]: time="2024-07-02T00:18:02.613775669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:02.614174 containerd[1460]: time="2024-07-02T00:18:02.613796141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:02.679047 systemd[1]: Started cri-containerd-d2b2964b9d7b9ef2be74c0e7ebb0505eb48c96ec20c73a59691a0168b4da4c1b.scope - libcontainer container d2b2964b9d7b9ef2be74c0e7ebb0505eb48c96ec20c73a59691a0168b4da4c1b.
Jul 2 00:18:02.690993 systemd[1]: Started cri-containerd-7035e7d8aa9de0076fbdbdaf6fb9eab509ce30383bcfbead4fa28216dc18357c.scope - libcontainer container 7035e7d8aa9de0076fbdbdaf6fb9eab509ce30383bcfbead4fa28216dc18357c.
Jul 2 00:18:02.720253 systemd[1]: Started cri-containerd-aa7e35aca95ebe761494fc7b1c4031bbda300f38342d1427f281261fffe4990a.scope - libcontainer container aa7e35aca95ebe761494fc7b1c4031bbda300f38342d1427f281261fffe4990a.
Jul 2 00:18:02.888412 containerd[1460]: time="2024-07-02T00:18:02.886921678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-8-31c642c6eb,Uid:71ca84309fb2b05ed2fd4d7e960911d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7035e7d8aa9de0076fbdbdaf6fb9eab509ce30383bcfbead4fa28216dc18357c\""
Jul 2 00:18:02.892299 kubelet[2176]: E0702 00:18:02.892112 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:02.897722 containerd[1460]: time="2024-07-02T00:18:02.897102387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-8-31c642c6eb,Uid:14e054431a2b71b9a87d2a3dd990cae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2b2964b9d7b9ef2be74c0e7ebb0505eb48c96ec20c73a59691a0168b4da4c1b\""
Jul 2 00:18:02.905797 kubelet[2176]: E0702 00:18:02.905717 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:02.909612 containerd[1460]: time="2024-07-02T00:18:02.908916374Z" level=info msg="CreateContainer within sandbox \"7035e7d8aa9de0076fbdbdaf6fb9eab509ce30383bcfbead4fa28216dc18357c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 00:18:02.923514 containerd[1460]: time="2024-07-02T00:18:02.922941687Z" level=info msg="CreateContainer within sandbox \"d2b2964b9d7b9ef2be74c0e7ebb0505eb48c96ec20c73a59691a0168b4da4c1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 00:18:02.947367 containerd[1460]: time="2024-07-02T00:18:02.947167324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-8-31c642c6eb,Uid:017ac7f2e737e1e16e89f8172459aad9,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa7e35aca95ebe761494fc7b1c4031bbda300f38342d1427f281261fffe4990a\""
Jul 2 00:18:02.949266 kubelet[2176]: E0702 00:18:02.949196 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:02.958735 containerd[1460]: time="2024-07-02T00:18:02.958634082Z" level=info msg="CreateContainer within sandbox \"aa7e35aca95ebe761494fc7b1c4031bbda300f38342d1427f281261fffe4990a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 00:18:02.976260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381601669.mount: Deactivated successfully.
Jul 2 00:18:02.984701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955882648.mount: Deactivated successfully.
Jul 2 00:18:03.005983 containerd[1460]: time="2024-07-02T00:18:03.005611053Z" level=info msg="CreateContainer within sandbox \"d2b2964b9d7b9ef2be74c0e7ebb0505eb48c96ec20c73a59691a0168b4da4c1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ca9d75611b6285d27609dd9523c2d8d479a4406b02883c3fc8ab78a28621205\""
Jul 2 00:18:03.014952 containerd[1460]: time="2024-07-02T00:18:03.014544657Z" level=info msg="StartContainer for \"6ca9d75611b6285d27609dd9523c2d8d479a4406b02883c3fc8ab78a28621205\""
Jul 2 00:18:03.034105 containerd[1460]: time="2024-07-02T00:18:03.033980843Z" level=info msg="CreateContainer within sandbox \"7035e7d8aa9de0076fbdbdaf6fb9eab509ce30383bcfbead4fa28216dc18357c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4badbea8ad316a52e56b241a26671287e353b15b64b78a48f1212a7ad486d54\""
Jul 2 00:18:03.037900 containerd[1460]: time="2024-07-02T00:18:03.036692840Z" level=info msg="StartContainer for \"f4badbea8ad316a52e56b241a26671287e353b15b64b78a48f1212a7ad486d54\""
Jul 2 00:18:03.040632 containerd[1460]: time="2024-07-02T00:18:03.040565774Z" level=info msg="CreateContainer within sandbox \"aa7e35aca95ebe761494fc7b1c4031bbda300f38342d1427f281261fffe4990a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"322bee2e6a364104efc4f91caa21b936ccc7c58a80d5c364ddfbd9fa93251101\""
Jul 2 00:18:03.041721 containerd[1460]: time="2024-07-02T00:18:03.041659407Z" level=info msg="StartContainer for \"322bee2e6a364104efc4f91caa21b936ccc7c58a80d5c364ddfbd9fa93251101\""
Jul 2 00:18:03.106969 systemd[1]: Started cri-containerd-6ca9d75611b6285d27609dd9523c2d8d479a4406b02883c3fc8ab78a28621205.scope - libcontainer container 6ca9d75611b6285d27609dd9523c2d8d479a4406b02883c3fc8ab78a28621205.
Jul 2 00:18:03.153772 systemd[1]: Started cri-containerd-322bee2e6a364104efc4f91caa21b936ccc7c58a80d5c364ddfbd9fa93251101.scope - libcontainer container 322bee2e6a364104efc4f91caa21b936ccc7c58a80d5c364ddfbd9fa93251101.
Jul 2 00:18:03.168360 systemd[1]: Started cri-containerd-f4badbea8ad316a52e56b241a26671287e353b15b64b78a48f1212a7ad486d54.scope - libcontainer container f4badbea8ad316a52e56b241a26671287e353b15b64b78a48f1212a7ad486d54.
Jul 2 00:18:03.299555 containerd[1460]: time="2024-07-02T00:18:03.299474155Z" level=info msg="StartContainer for \"6ca9d75611b6285d27609dd9523c2d8d479a4406b02883c3fc8ab78a28621205\" returns successfully"
Jul 2 00:18:03.313233 kubelet[2176]: E0702 00:18:03.313023 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.97.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-8-31c642c6eb?timeout=10s\": dial tcp 64.227.97.255:6443: connect: connection refused" interval="3.2s"
Jul 2 00:18:03.381140 kubelet[2176]: W0702 00:18:03.380966 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://64.227.97.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-8-31c642c6eb&limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused
Jul 2 00:18:03.381140 kubelet[2176]: E0702 00:18:03.381052 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.227.97.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-8-31c642c6eb&limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused
Jul 2 00:18:03.384019 containerd[1460]: time="2024-07-02T00:18:03.383942923Z" level=info msg="StartContainer for \"f4badbea8ad316a52e56b241a26671287e353b15b64b78a48f1212a7ad486d54\" returns successfully"
Jul 2 00:18:03.384251 containerd[1460]: time="2024-07-02T00:18:03.384189108Z" level=info msg="StartContainer for \"322bee2e6a364104efc4f91caa21b936ccc7c58a80d5c364ddfbd9fa93251101\" returns successfully"
Jul 2 00:18:03.396407 kubelet[2176]: I0702 00:18:03.396334 2176 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:03.399028 kubelet[2176]: E0702 00:18:03.398977 2176 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.97.255:6443/api/v1/nodes\": dial tcp 64.227.97.255:6443: connect: connection refused" node="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:03.420725 kubelet[2176]: E0702 00:18:03.420664 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:03.433654 kubelet[2176]: E0702 00:18:03.433567 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:03.507267 kubelet[2176]: W0702 00:18:03.506782 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://64.227.97.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused
Jul 2 00:18:03.507507 kubelet[2176]: E0702 00:18:03.507295 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.227.97.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused
Jul 2 00:18:03.581614 kubelet[2176]: W0702 00:18:03.576783 2176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://64.227.97.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused
Jul 2 00:18:03.581614 kubelet[2176]: E0702 00:18:03.576893 2176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.227.97.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.97.255:6443: connect: connection refused
Jul 2 00:18:04.438911 kubelet[2176]: E0702 00:18:04.438640 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:04.448931 kubelet[2176]: E0702 00:18:04.448246 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:05.455878 kubelet[2176]: E0702 00:18:05.455661 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:05.457648 kubelet[2176]: E0702 00:18:05.457509 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:06.456725 kubelet[2176]: E0702 00:18:06.456678 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:06.603684 kubelet[2176]: I0702 00:18:06.601403 2176 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:07.246831 kubelet[2176]: I0702 00:18:07.246772 2176 apiserver.go:52] "Watching apiserver"
Jul 2 00:18:07.302918 kubelet[2176]: E0702 00:18:07.302867 2176 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.1.1-8-31c642c6eb\" not found" node="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:07.351642 kubelet[2176]: I0702 00:18:07.351362 2176 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:07.359205 kubelet[2176]: I0702 00:18:07.359089 2176 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:18:10.942483 update_engine[1447]: I0702 00:18:10.941641 1447 update_attempter.cc:509] Updating boot flags...
Jul 2 00:18:10.978958 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2460)
Jul 2 00:18:11.055130 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2459)
Jul 2 00:18:11.300350 systemd[1]: Reloading requested from client PID 2468 ('systemctl') (unit session-7.scope)...
Jul 2 00:18:11.300368 systemd[1]: Reloading...
Jul 2 00:18:11.455952 zram_generator::config[2508]: No configuration found.
Jul 2 00:18:11.611471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:18:11.735399 systemd[1]: Reloading finished in 434 ms.
Jul 2 00:18:11.786869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:18:11.801564 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 00:18:11.802441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:18:11.802683 systemd[1]: kubelet.service: Consumed 1.650s CPU time, 109.9M memory peak, 0B memory swap peak.
Jul 2 00:18:11.820189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:18:11.982845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:18:12.001548 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:18:12.126703 kubelet[2555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:18:12.127391 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:18:12.127489 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:18:12.127713 kubelet[2555]: I0702 00:18:12.127652 2555 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:18:12.137370 kubelet[2555]: I0702 00:18:12.137316 2555 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 00:18:12.137646 kubelet[2555]: I0702 00:18:12.137623 2555 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:18:12.138368 kubelet[2555]: I0702 00:18:12.138332 2555 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 00:18:12.141753 kubelet[2555]: I0702 00:18:12.141715 2555 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 00:18:12.151340 kubelet[2555]: I0702 00:18:12.151287 2555 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:18:12.162227 kubelet[2555]: I0702 00:18:12.162170 2555 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:18:12.162476 kubelet[2555]: I0702 00:18:12.162453 2555 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:18:12.163735 kubelet[2555]: I0702 00:18:12.162649 2555 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:18:12.163735 kubelet[2555]: I0702 00:18:12.162685 2555 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:18:12.163735 kubelet[2555]: I0702 00:18:12.162699 2555 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:18:12.163735 kubelet[2555]: I0702 00:18:12.162744 2555 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:18:12.163735 kubelet[2555]: I0702 00:18:12.162910 2555 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 00:18:12.163735 kubelet[2555]: I0702 00:18:12.162933 2555 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:18:12.163735 kubelet[2555]: I0702 00:18:12.162971 2555 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:18:12.164268 kubelet[2555]: I0702 00:18:12.162996 2555 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:18:12.183762 kubelet[2555]: I0702 00:18:12.180709 2555 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:18:12.183762 kubelet[2555]: I0702 00:18:12.181710 2555 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:18:12.187790 kubelet[2555]: I0702 00:18:12.187410 2555 server.go:1256] "Started kubelet"
Jul 2 00:18:12.191811 kubelet[2555]: I0702 00:18:12.189662 2555 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:18:12.192030 kubelet[2555]: I0702 00:18:12.191921 2555 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:18:12.192030 kubelet[2555]: I0702 00:18:12.192025 2555 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:18:12.194281 kubelet[2555]: I0702 00:18:12.193607 2555 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 00:18:12.201819 kubelet[2555]: E0702 00:18:12.201365 2555 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:18:12.201819 kubelet[2555]: I0702 00:18:12.201661 2555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:18:12.207508 kubelet[2555]: I0702 00:18:12.207464 2555 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:18:12.213091 kubelet[2555]: I0702 00:18:12.208220 2555 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:18:12.213091 kubelet[2555]: I0702 00:18:12.212516 2555 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:18:12.219094 kubelet[2555]: I0702 00:18:12.217846 2555 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:18:12.221406 kubelet[2555]: I0702 00:18:12.220224 2555 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:18:12.230791 kubelet[2555]: I0702 00:18:12.230187 2555 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:18:12.271902 kubelet[2555]: I0702 00:18:12.271063 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:18:12.276724 kubelet[2555]: I0702 00:18:12.276538 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:18:12.276724 kubelet[2555]: I0702 00:18:12.276578 2555 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:18:12.276724 kubelet[2555]: I0702 00:18:12.276605 2555 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:18:12.277963 kubelet[2555]: E0702 00:18:12.277632 2555 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:18:12.310805 kubelet[2555]: I0702 00:18:12.309060 2555 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.324587 kubelet[2555]: I0702 00:18:12.324548 2555 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.324759 kubelet[2555]: I0702 00:18:12.324641 2555 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.339439 kubelet[2555]: I0702 00:18:12.339401 2555 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:18:12.339439 kubelet[2555]: I0702 00:18:12.339428 2555 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:18:12.339439 kubelet[2555]: I0702 00:18:12.339455 2555 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:18:12.339771 kubelet[2555]: I0702 00:18:12.339645 2555 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 00:18:12.339771 kubelet[2555]: I0702 00:18:12.339666 2555 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 00:18:12.339771 kubelet[2555]: I0702 00:18:12.339673 2555 policy_none.go:49] "None policy: Start"
Jul 2 00:18:12.344041 kubelet[2555]: I0702 00:18:12.343690 2555 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:18:12.344323 kubelet[2555]: I0702 00:18:12.344305 2555 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:18:12.344616 kubelet[2555]: I0702 00:18:12.344601 2555 state_mem.go:75] "Updated machine memory state"
Jul 2 00:18:12.351001 kubelet[2555]: I0702 00:18:12.350274 2555 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:18:12.351957 kubelet[2555]: I0702 00:18:12.351840 2555 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:18:12.378729 kubelet[2555]: I0702 00:18:12.377737 2555 topology_manager.go:215] "Topology Admit Handler" podUID="71ca84309fb2b05ed2fd4d7e960911d8" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.378729 kubelet[2555]: I0702 00:18:12.377981 2555 topology_manager.go:215] "Topology Admit Handler" podUID="14e054431a2b71b9a87d2a3dd990cae8" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.378729 kubelet[2555]: I0702 00:18:12.378049 2555 topology_manager.go:215] "Topology Admit Handler" podUID="017ac7f2e737e1e16e89f8172459aad9" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.392741 kubelet[2555]: W0702 00:18:12.392695 2555 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:18:12.396600 kubelet[2555]: W0702 00:18:12.396556 2555 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:18:12.398008 kubelet[2555]: W0702 00:18:12.397030 2555 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:18:12.514926 kubelet[2555]: I0702 00:18:12.513766 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.514926 kubelet[2555]: I0702 00:18:12.513843 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14e054431a2b71b9a87d2a3dd990cae8-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-8-31c642c6eb\" (UID: \"14e054431a2b71b9a87d2a3dd990cae8\") " pod="kube-system/kube-scheduler-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.514926 kubelet[2555]: I0702 00:18:12.514058 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/017ac7f2e737e1e16e89f8172459aad9-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-8-31c642c6eb\" (UID: \"017ac7f2e737e1e16e89f8172459aad9\") " pod="kube-system/kube-apiserver-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.514926 kubelet[2555]: I0702 00:18:12.514091 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/017ac7f2e737e1e16e89f8172459aad9-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-8-31c642c6eb\" (UID: \"017ac7f2e737e1e16e89f8172459aad9\") " pod="kube-system/kube-apiserver-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.514926 kubelet[2555]: I0702 00:18:12.514122 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/017ac7f2e737e1e16e89f8172459aad9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-8-31c642c6eb\" (UID: \"017ac7f2e737e1e16e89f8172459aad9\") " pod="kube-system/kube-apiserver-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.515288 kubelet[2555]: I0702 00:18:12.514149 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.515288 kubelet[2555]: I0702 00:18:12.514177 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.515288 kubelet[2555]: I0702 00:18:12.514204 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.515288 kubelet[2555]: I0702 00:18:12.514234 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71ca84309fb2b05ed2fd4d7e960911d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-8-31c642c6eb\" (UID: \"71ca84309fb2b05ed2fd4d7e960911d8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb"
Jul 2 00:18:12.696200 kubelet[2555]: E0702 00:18:12.694817 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:12.698342 kubelet[2555]: E0702 00:18:12.698285 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:12.699650 kubelet[2555]: E0702 00:18:12.699608 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:13.166871 kubelet[2555]: I0702 00:18:13.166544 2555 apiserver.go:52] "Watching apiserver"
Jul 2 00:18:13.213367 kubelet[2555]: I0702 00:18:13.213277 2555 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:18:13.307642 kubelet[2555]: E0702 00:18:13.307565 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:13.308270 kubelet[2555]: E0702 00:18:13.308193 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:13.309010 kubelet[2555]: E0702 00:18:13.308553 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:13.360758 kubelet[2555]: I0702 00:18:13.360460 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-8-31c642c6eb" podStartSLOduration=1.3604084269999999 podStartE2EDuration="1.360408427s" podCreationTimestamp="2024-07-02 00:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:13.360374524 +0000 UTC m=+1.344778594" watchObservedRunningTime="2024-07-02 00:18:13.360408427 +0000 UTC m=+1.344812500"
Jul 2 00:18:13.413067 kubelet[2555]: I0702 00:18:13.413021 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-8-31c642c6eb" podStartSLOduration=1.412969709 podStartE2EDuration="1.412969709s" podCreationTimestamp="2024-07-02 00:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:13.388014418 +0000 UTC m=+1.372418490" watchObservedRunningTime="2024-07-02 00:18:13.412969709 +0000 UTC m=+1.397373782"
Jul 2 00:18:13.477369 kubelet[2555]: I0702 00:18:13.477164 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-8-31c642c6eb" podStartSLOduration=1.477120134 podStartE2EDuration="1.477120134s" podCreationTimestamp="2024-07-02 00:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:13.413714016 +0000 UTC m=+1.398118086" watchObservedRunningTime="2024-07-02 00:18:13.477120134 +0000 UTC m=+1.461524205"
Jul 2 00:18:14.312728 kubelet[2555]: E0702 00:18:14.312567 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:15.315444 kubelet[2555]: E0702 00:18:15.314012 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:18:17.646165 sudo[1652]: pam_unix(sudo:session): session closed for user root
Jul 2 00:18:17.651413 sshd[1649]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:17.656047 systemd[1]: sshd@6-64.227.97.255:22-147.75.109.163:54180.service: Deactivated successfully.
Jul 2 00:18:17.659776 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:18:17.660480 systemd[1]: session-7.scope: Consumed 6.800s CPU time, 135.0M memory peak, 0B memory swap peak. Jul 2 00:18:17.663363 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:18:17.664747 systemd-logind[1446]: Removed session 7. Jul 2 00:18:18.952445 kubelet[2555]: E0702 00:18:18.952404 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:19.061917 kubelet[2555]: E0702 00:18:19.061096 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:19.323116 kubelet[2555]: E0702 00:18:19.322432 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:19.324100 kubelet[2555]: E0702 00:18:19.324061 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:21.004882 kubelet[2555]: E0702 00:18:21.004669 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:21.328801 kubelet[2555]: E0702 00:18:21.328103 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:23.321185 kubelet[2555]: I0702 00:18:23.321140 2555 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:18:23.323108 containerd[1460]: 
time="2024-07-02T00:18:23.323018996Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:18:23.323758 kubelet[2555]: I0702 00:18:23.323426 2555 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:18:24.306921 kubelet[2555]: I0702 00:18:24.306042 2555 topology_manager.go:215] "Topology Admit Handler" podUID="1c8b4718-7a21-4dc6-8dc7-62019cbc950f" podNamespace="kube-system" podName="kube-proxy-82tfl" Jul 2 00:18:24.340773 systemd[1]: Created slice kubepods-besteffort-pod1c8b4718_7a21_4dc6_8dc7_62019cbc950f.slice - libcontainer container kubepods-besteffort-pod1c8b4718_7a21_4dc6_8dc7_62019cbc950f.slice. Jul 2 00:18:24.492114 kubelet[2555]: I0702 00:18:24.492020 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c8b4718-7a21-4dc6-8dc7-62019cbc950f-lib-modules\") pod \"kube-proxy-82tfl\" (UID: \"1c8b4718-7a21-4dc6-8dc7-62019cbc950f\") " pod="kube-system/kube-proxy-82tfl" Jul 2 00:18:24.492114 kubelet[2555]: I0702 00:18:24.492096 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87gj\" (UniqueName: \"kubernetes.io/projected/1c8b4718-7a21-4dc6-8dc7-62019cbc950f-kube-api-access-j87gj\") pod \"kube-proxy-82tfl\" (UID: \"1c8b4718-7a21-4dc6-8dc7-62019cbc950f\") " pod="kube-system/kube-proxy-82tfl" Jul 2 00:18:24.492939 kubelet[2555]: I0702 00:18:24.492151 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c8b4718-7a21-4dc6-8dc7-62019cbc950f-xtables-lock\") pod \"kube-proxy-82tfl\" (UID: \"1c8b4718-7a21-4dc6-8dc7-62019cbc950f\") " pod="kube-system/kube-proxy-82tfl" Jul 2 00:18:24.492939 kubelet[2555]: I0702 00:18:24.492201 2555 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c8b4718-7a21-4dc6-8dc7-62019cbc950f-kube-proxy\") pod \"kube-proxy-82tfl\" (UID: \"1c8b4718-7a21-4dc6-8dc7-62019cbc950f\") " pod="kube-system/kube-proxy-82tfl" Jul 2 00:18:24.494456 kubelet[2555]: I0702 00:18:24.494346 2555 topology_manager.go:215] "Topology Admit Handler" podUID="136d08e4-9023-4c81-ad58-47e5a924cfe1" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-9mtps" Jul 2 00:18:24.514771 systemd[1]: Created slice kubepods-besteffort-pod136d08e4_9023_4c81_ad58_47e5a924cfe1.slice - libcontainer container kubepods-besteffort-pod136d08e4_9023_4c81_ad58_47e5a924cfe1.slice. Jul 2 00:18:24.593220 kubelet[2555]: I0702 00:18:24.592997 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/136d08e4-9023-4c81-ad58-47e5a924cfe1-var-lib-calico\") pod \"tigera-operator-76c4974c85-9mtps\" (UID: \"136d08e4-9023-4c81-ad58-47e5a924cfe1\") " pod="tigera-operator/tigera-operator-76c4974c85-9mtps" Jul 2 00:18:24.593220 kubelet[2555]: I0702 00:18:24.593073 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkl7r\" (UniqueName: \"kubernetes.io/projected/136d08e4-9023-4c81-ad58-47e5a924cfe1-kube-api-access-nkl7r\") pod \"tigera-operator-76c4974c85-9mtps\" (UID: \"136d08e4-9023-4c81-ad58-47e5a924cfe1\") " pod="tigera-operator/tigera-operator-76c4974c85-9mtps" Jul 2 00:18:24.658501 kubelet[2555]: E0702 00:18:24.656017 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:24.660053 containerd[1460]: time="2024-07-02T00:18:24.659971824Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-82tfl,Uid:1c8b4718-7a21-4dc6-8dc7-62019cbc950f,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:24.745948 containerd[1460]: time="2024-07-02T00:18:24.745226100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:24.745948 containerd[1460]: time="2024-07-02T00:18:24.745344600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:24.745948 containerd[1460]: time="2024-07-02T00:18:24.745401567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:24.745948 containerd[1460]: time="2024-07-02T00:18:24.745432073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:24.812336 systemd[1]: Started cri-containerd-880d3cd456b1149911e236a1a140e21f4208811a74d0b0024b01be5f9e03b712.scope - libcontainer container 880d3cd456b1149911e236a1a140e21f4208811a74d0b0024b01be5f9e03b712. 
Jul 2 00:18:24.824834 containerd[1460]: time="2024-07-02T00:18:24.824478295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-9mtps,Uid:136d08e4-9023-4c81-ad58-47e5a924cfe1,Namespace:tigera-operator,Attempt:0,}" Jul 2 00:18:24.908239 containerd[1460]: time="2024-07-02T00:18:24.906965438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82tfl,Uid:1c8b4718-7a21-4dc6-8dc7-62019cbc950f,Namespace:kube-system,Attempt:0,} returns sandbox id \"880d3cd456b1149911e236a1a140e21f4208811a74d0b0024b01be5f9e03b712\"" Jul 2 00:18:24.909652 kubelet[2555]: E0702 00:18:24.909439 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:24.923739 containerd[1460]: time="2024-07-02T00:18:24.922874317Z" level=info msg="CreateContainer within sandbox \"880d3cd456b1149911e236a1a140e21f4208811a74d0b0024b01be5f9e03b712\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:18:24.957091 containerd[1460]: time="2024-07-02T00:18:24.956778527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:24.957091 containerd[1460]: time="2024-07-02T00:18:24.956908359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:24.957342 containerd[1460]: time="2024-07-02T00:18:24.957159398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:24.957928 containerd[1460]: time="2024-07-02T00:18:24.957301096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:24.997643 containerd[1460]: time="2024-07-02T00:18:24.997560353Z" level=info msg="CreateContainer within sandbox \"880d3cd456b1149911e236a1a140e21f4208811a74d0b0024b01be5f9e03b712\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"19c0b9983cc296dd29fb1fd7172393532e95f4fb67bc47e4bbd378019ccac8df\"" Jul 2 00:18:24.998288 systemd[1]: Started cri-containerd-7563a6608d0b4e034a93d37c5bb2c740ee3d53f0df59d47d9c892191fcc23581.scope - libcontainer container 7563a6608d0b4e034a93d37c5bb2c740ee3d53f0df59d47d9c892191fcc23581. Jul 2 00:18:25.003128 containerd[1460]: time="2024-07-02T00:18:25.003056493Z" level=info msg="StartContainer for \"19c0b9983cc296dd29fb1fd7172393532e95f4fb67bc47e4bbd378019ccac8df\"" Jul 2 00:18:25.092692 systemd[1]: Started cri-containerd-19c0b9983cc296dd29fb1fd7172393532e95f4fb67bc47e4bbd378019ccac8df.scope - libcontainer container 19c0b9983cc296dd29fb1fd7172393532e95f4fb67bc47e4bbd378019ccac8df. 
Jul 2 00:18:25.129448 containerd[1460]: time="2024-07-02T00:18:25.129373947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-9mtps,Uid:136d08e4-9023-4c81-ad58-47e5a924cfe1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7563a6608d0b4e034a93d37c5bb2c740ee3d53f0df59d47d9c892191fcc23581\"" Jul 2 00:18:25.139388 containerd[1460]: time="2024-07-02T00:18:25.139325257Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 00:18:25.195749 containerd[1460]: time="2024-07-02T00:18:25.195658294Z" level=info msg="StartContainer for \"19c0b9983cc296dd29fb1fd7172393532e95f4fb67bc47e4bbd378019ccac8df\" returns successfully" Jul 2 00:18:25.346965 kubelet[2555]: E0702 00:18:25.346370 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:25.378752 kubelet[2555]: I0702 00:18:25.377292 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-82tfl" podStartSLOduration=1.3772316660000001 podStartE2EDuration="1.377231666s" podCreationTimestamp="2024-07-02 00:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:25.376875036 +0000 UTC m=+13.361279116" watchObservedRunningTime="2024-07-02 00:18:25.377231666 +0000 UTC m=+13.361635737" Jul 2 00:18:26.932757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112470204.mount: Deactivated successfully. 
Jul 2 00:18:28.756137 containerd[1460]: time="2024-07-02T00:18:28.756059719Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:28.759319 containerd[1460]: time="2024-07-02T00:18:28.758786132Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068" Jul 2 00:18:28.764396 containerd[1460]: time="2024-07-02T00:18:28.764326255Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:28.784129 containerd[1460]: time="2024-07-02T00:18:28.778356099Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:28.786304 containerd[1460]: time="2024-07-02T00:18:28.786213057Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.64682562s" Jul 2 00:18:28.786652 containerd[1460]: time="2024-07-02T00:18:28.786470244Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 00:18:28.793313 containerd[1460]: time="2024-07-02T00:18:28.791631541Z" level=info msg="CreateContainer within sandbox \"7563a6608d0b4e034a93d37c5bb2c740ee3d53f0df59d47d9c892191fcc23581\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 00:18:28.823608 containerd[1460]: time="2024-07-02T00:18:28.823503109Z" level=info msg="CreateContainer within sandbox 
\"7563a6608d0b4e034a93d37c5bb2c740ee3d53f0df59d47d9c892191fcc23581\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9f78a50ebf828dad9981ca55eaaef138867355935e4556d8fb11e490ce934695\"" Jul 2 00:18:28.826562 containerd[1460]: time="2024-07-02T00:18:28.825138676Z" level=info msg="StartContainer for \"9f78a50ebf828dad9981ca55eaaef138867355935e4556d8fb11e490ce934695\"" Jul 2 00:18:28.904380 systemd[1]: Started cri-containerd-9f78a50ebf828dad9981ca55eaaef138867355935e4556d8fb11e490ce934695.scope - libcontainer container 9f78a50ebf828dad9981ca55eaaef138867355935e4556d8fb11e490ce934695. Jul 2 00:18:28.989602 containerd[1460]: time="2024-07-02T00:18:28.989528359Z" level=info msg="StartContainer for \"9f78a50ebf828dad9981ca55eaaef138867355935e4556d8fb11e490ce934695\" returns successfully" Jul 2 00:18:29.385268 kubelet[2555]: I0702 00:18:29.385156 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-9mtps" podStartSLOduration=1.734052382 podStartE2EDuration="5.385039617s" podCreationTimestamp="2024-07-02 00:18:24 +0000 UTC" firstStartedPulling="2024-07-02 00:18:25.136217061 +0000 UTC m=+13.120621134" lastFinishedPulling="2024-07-02 00:18:28.787204316 +0000 UTC m=+16.771608369" observedRunningTime="2024-07-02 00:18:29.384969011 +0000 UTC m=+17.369373084" watchObservedRunningTime="2024-07-02 00:18:29.385039617 +0000 UTC m=+17.369443690" Jul 2 00:18:32.317075 kubelet[2555]: I0702 00:18:32.317026 2555 topology_manager.go:215] "Topology Admit Handler" podUID="1cdd4291-9cb6-42ab-9845-5280104db715" podNamespace="calico-system" podName="calico-typha-86944d65b7-bn7hz" Jul 2 00:18:32.329781 systemd[1]: Created slice kubepods-besteffort-pod1cdd4291_9cb6_42ab_9845_5280104db715.slice - libcontainer container kubepods-besteffort-pod1cdd4291_9cb6_42ab_9845_5280104db715.slice. 
Jul 2 00:18:32.440339 kubelet[2555]: I0702 00:18:32.440286 2555 topology_manager.go:215] "Topology Admit Handler" podUID="a031f6b3-45d9-401c-bb15-a710f4226970" podNamespace="calico-system" podName="calico-node-xzbwn" Jul 2 00:18:32.452620 systemd[1]: Created slice kubepods-besteffort-poda031f6b3_45d9_401c_bb15_a710f4226970.slice - libcontainer container kubepods-besteffort-poda031f6b3_45d9_401c_bb15_a710f4226970.slice. Jul 2 00:18:32.460915 kubelet[2555]: I0702 00:18:32.460403 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1cdd4291-9cb6-42ab-9845-5280104db715-typha-certs\") pod \"calico-typha-86944d65b7-bn7hz\" (UID: \"1cdd4291-9cb6-42ab-9845-5280104db715\") " pod="calico-system/calico-typha-86944d65b7-bn7hz" Jul 2 00:18:32.460915 kubelet[2555]: I0702 00:18:32.460457 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wbr\" (UniqueName: \"kubernetes.io/projected/1cdd4291-9cb6-42ab-9845-5280104db715-kube-api-access-j8wbr\") pod \"calico-typha-86944d65b7-bn7hz\" (UID: \"1cdd4291-9cb6-42ab-9845-5280104db715\") " pod="calico-system/calico-typha-86944d65b7-bn7hz" Jul 2 00:18:32.460915 kubelet[2555]: I0702 00:18:32.460479 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cdd4291-9cb6-42ab-9845-5280104db715-tigera-ca-bundle\") pod \"calico-typha-86944d65b7-bn7hz\" (UID: \"1cdd4291-9cb6-42ab-9845-5280104db715\") " pod="calico-system/calico-typha-86944d65b7-bn7hz" Jul 2 00:18:32.555779 kubelet[2555]: I0702 00:18:32.555708 2555 topology_manager.go:215] "Topology Admit Handler" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" podNamespace="calico-system" podName="csi-node-driver-9kkwj" Jul 2 00:18:32.556929 kubelet[2555]: E0702 00:18:32.556263 2555 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:32.563078 kubelet[2555]: I0702 00:18:32.561626 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-policysync\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.563078 kubelet[2555]: I0702 00:18:32.562796 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-var-run-calico\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.563078 kubelet[2555]: I0702 00:18:32.562834 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-log-dir\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.563078 kubelet[2555]: I0702 00:18:32.562900 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8qjf\" (UniqueName: \"kubernetes.io/projected/a031f6b3-45d9-401c-bb15-a710f4226970-kube-api-access-p8qjf\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.563078 kubelet[2555]: I0702 00:18:32.562937 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-lib-modules\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.563473 kubelet[2555]: I0702 00:18:32.562970 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a031f6b3-45d9-401c-bb15-a710f4226970-node-certs\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.564842 kubelet[2555]: I0702 00:18:32.564422 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-bin-dir\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.564842 kubelet[2555]: I0702 00:18:32.564490 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-var-lib-calico\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.566888 kubelet[2555]: I0702 00:18:32.565918 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-net-dir\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.566888 kubelet[2555]: I0702 00:18:32.566062 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-xtables-lock\") pod 
\"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.566888 kubelet[2555]: I0702 00:18:32.566109 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-flexvol-driver-host\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.566888 kubelet[2555]: I0702 00:18:32.566193 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a031f6b3-45d9-401c-bb15-a710f4226970-tigera-ca-bundle\") pod \"calico-node-xzbwn\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " pod="calico-system/calico-node-xzbwn" Jul 2 00:18:32.637390 kubelet[2555]: E0702 00:18:32.637223 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:32.638892 containerd[1460]: time="2024-07-02T00:18:32.638353458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86944d65b7-bn7hz,Uid:1cdd4291-9cb6-42ab-9845-5280104db715,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:32.669899 kubelet[2555]: I0702 00:18:32.667260 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40064fc9-24a4-4ccf-9623-b652332a27c6-socket-dir\") pod \"csi-node-driver-9kkwj\" (UID: \"40064fc9-24a4-4ccf-9623-b652332a27c6\") " pod="calico-system/csi-node-driver-9kkwj" Jul 2 00:18:32.669899 kubelet[2555]: I0702 00:18:32.667334 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxq97\" (UniqueName: 
\"kubernetes.io/projected/40064fc9-24a4-4ccf-9623-b652332a27c6-kube-api-access-lxq97\") pod \"csi-node-driver-9kkwj\" (UID: \"40064fc9-24a4-4ccf-9623-b652332a27c6\") " pod="calico-system/csi-node-driver-9kkwj" Jul 2 00:18:32.669899 kubelet[2555]: I0702 00:18:32.667448 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40064fc9-24a4-4ccf-9623-b652332a27c6-kubelet-dir\") pod \"csi-node-driver-9kkwj\" (UID: \"40064fc9-24a4-4ccf-9623-b652332a27c6\") " pod="calico-system/csi-node-driver-9kkwj" Jul 2 00:18:32.669899 kubelet[2555]: I0702 00:18:32.667525 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/40064fc9-24a4-4ccf-9623-b652332a27c6-varrun\") pod \"csi-node-driver-9kkwj\" (UID: \"40064fc9-24a4-4ccf-9623-b652332a27c6\") " pod="calico-system/csi-node-driver-9kkwj" Jul 2 00:18:32.669899 kubelet[2555]: I0702 00:18:32.667582 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40064fc9-24a4-4ccf-9623-b652332a27c6-registration-dir\") pod \"csi-node-driver-9kkwj\" (UID: \"40064fc9-24a4-4ccf-9623-b652332a27c6\") " pod="calico-system/csi-node-driver-9kkwj" Jul 2 00:18:32.680915 kubelet[2555]: E0702 00:18:32.680187 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.680915 kubelet[2555]: W0702 00:18:32.680235 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.680915 kubelet[2555]: E0702 00:18:32.680295 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.683093 kubelet[2555]: E0702 00:18:32.682994 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.683301 kubelet[2555]: W0702 00:18:32.683048 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.683301 kubelet[2555]: E0702 00:18:32.683177 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.693251 kubelet[2555]: E0702 00:18:32.691562 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.693691 kubelet[2555]: W0702 00:18:32.692844 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.697898 kubelet[2555]: E0702 00:18:32.694963 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.731534 kubelet[2555]: E0702 00:18:32.731476 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.731534 kubelet[2555]: W0702 00:18:32.731512 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.731534 kubelet[2555]: E0702 00:18:32.731535 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.744514 containerd[1460]: time="2024-07-02T00:18:32.744314656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:32.744915 containerd[1460]: time="2024-07-02T00:18:32.744426116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:32.744915 containerd[1460]: time="2024-07-02T00:18:32.744465194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:32.744915 containerd[1460]: time="2024-07-02T00:18:32.744484469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:32.760059 kubelet[2555]: E0702 00:18:32.759991 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:32.763599 containerd[1460]: time="2024-07-02T00:18:32.763534136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xzbwn,Uid:a031f6b3-45d9-401c-bb15-a710f4226970,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:32.770115 kubelet[2555]: E0702 00:18:32.769550 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.770115 kubelet[2555]: W0702 00:18:32.769614 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.770115 kubelet[2555]: E0702 00:18:32.769649 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.770456 kubelet[2555]: E0702 00:18:32.770149 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.770456 kubelet[2555]: W0702 00:18:32.770163 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.770456 kubelet[2555]: E0702 00:18:32.770205 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.772168 kubelet[2555]: E0702 00:18:32.770683 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.772168 kubelet[2555]: W0702 00:18:32.770699 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.772168 kubelet[2555]: E0702 00:18:32.770719 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.772168 kubelet[2555]: E0702 00:18:32.770996 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.772168 kubelet[2555]: W0702 00:18:32.771007 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.772168 kubelet[2555]: E0702 00:18:32.771147 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.772168 kubelet[2555]: E0702 00:18:32.771284 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.772168 kubelet[2555]: W0702 00:18:32.771293 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.772168 kubelet[2555]: E0702 00:18:32.771312 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.772168 kubelet[2555]: E0702 00:18:32.771545 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.772878 kubelet[2555]: W0702 00:18:32.771553 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.772878 kubelet[2555]: E0702 00:18:32.771572 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.772878 kubelet[2555]: E0702 00:18:32.771822 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.772878 kubelet[2555]: W0702 00:18:32.771832 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.772878 kubelet[2555]: E0702 00:18:32.771989 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.772878 kubelet[2555]: E0702 00:18:32.772151 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.772878 kubelet[2555]: W0702 00:18:32.772161 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.772878 kubelet[2555]: E0702 00:18:32.772191 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.772878 kubelet[2555]: E0702 00:18:32.772548 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.772878 kubelet[2555]: W0702 00:18:32.772597 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.775756 kubelet[2555]: E0702 00:18:32.772625 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.775756 kubelet[2555]: E0702 00:18:32.773033 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.775756 kubelet[2555]: W0702 00:18:32.773047 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.775756 kubelet[2555]: E0702 00:18:32.773101 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.775756 kubelet[2555]: E0702 00:18:32.773578 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.775756 kubelet[2555]: W0702 00:18:32.773590 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.775756 kubelet[2555]: E0702 00:18:32.773610 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.775756 kubelet[2555]: E0702 00:18:32.773960 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.775756 kubelet[2555]: W0702 00:18:32.773971 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.775756 kubelet[2555]: E0702 00:18:32.774067 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.776479 kubelet[2555]: E0702 00:18:32.774220 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.776479 kubelet[2555]: W0702 00:18:32.774228 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.776479 kubelet[2555]: E0702 00:18:32.774347 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.776479 kubelet[2555]: E0702 00:18:32.774627 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.776479 kubelet[2555]: W0702 00:18:32.774642 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.776479 kubelet[2555]: E0702 00:18:32.774667 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.776479 kubelet[2555]: E0702 00:18:32.774993 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.776479 kubelet[2555]: W0702 00:18:32.775009 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.776479 kubelet[2555]: E0702 00:18:32.775100 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.776479 kubelet[2555]: E0702 00:18:32.775337 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.778832 kubelet[2555]: W0702 00:18:32.775348 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.778832 kubelet[2555]: E0702 00:18:32.775463 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.778832 kubelet[2555]: E0702 00:18:32.775649 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.778832 kubelet[2555]: W0702 00:18:32.775661 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.778832 kubelet[2555]: E0702 00:18:32.775824 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.778832 kubelet[2555]: E0702 00:18:32.776369 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.778832 kubelet[2555]: W0702 00:18:32.776383 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.778832 kubelet[2555]: E0702 00:18:32.776471 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.778832 kubelet[2555]: E0702 00:18:32.776703 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.778832 kubelet[2555]: W0702 00:18:32.776716 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.780569 kubelet[2555]: E0702 00:18:32.776807 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.780569 kubelet[2555]: E0702 00:18:32.777186 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.780569 kubelet[2555]: W0702 00:18:32.777197 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.780569 kubelet[2555]: E0702 00:18:32.777398 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.780569 kubelet[2555]: E0702 00:18:32.777449 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.780569 kubelet[2555]: W0702 00:18:32.777470 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.780569 kubelet[2555]: E0702 00:18:32.777500 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.780569 kubelet[2555]: E0702 00:18:32.777727 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.780569 kubelet[2555]: W0702 00:18:32.777739 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.780569 kubelet[2555]: E0702 00:18:32.777769 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.782327 kubelet[2555]: E0702 00:18:32.778789 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.782327 kubelet[2555]: W0702 00:18:32.778803 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.782327 kubelet[2555]: E0702 00:18:32.778827 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.782327 kubelet[2555]: E0702 00:18:32.779112 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.782327 kubelet[2555]: W0702 00:18:32.779157 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.782327 kubelet[2555]: E0702 00:18:32.779170 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:32.782327 kubelet[2555]: E0702 00:18:32.779496 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.782327 kubelet[2555]: W0702 00:18:32.779533 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.782327 kubelet[2555]: E0702 00:18:32.779554 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.806277 systemd[1]: Started cri-containerd-a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e.scope - libcontainer container a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e. Jul 2 00:18:32.832273 kubelet[2555]: E0702 00:18:32.831634 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:32.832273 kubelet[2555]: W0702 00:18:32.831708 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:32.832273 kubelet[2555]: E0702 00:18:32.831742 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:32.858450 containerd[1460]: time="2024-07-02T00:18:32.857629660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:32.858450 containerd[1460]: time="2024-07-02T00:18:32.857712292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:32.858450 containerd[1460]: time="2024-07-02T00:18:32.857728800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:32.858450 containerd[1460]: time="2024-07-02T00:18:32.857738719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:32.908474 systemd[1]: Started cri-containerd-589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687.scope - libcontainer container 589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687. Jul 2 00:18:32.983564 containerd[1460]: time="2024-07-02T00:18:32.983269534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86944d65b7-bn7hz,Uid:1cdd4291-9cb6-42ab-9845-5280104db715,Namespace:calico-system,Attempt:0,} returns sandbox id \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\"" Jul 2 00:18:32.985647 containerd[1460]: time="2024-07-02T00:18:32.985421664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xzbwn,Uid:a031f6b3-45d9-401c-bb15-a710f4226970,Namespace:calico-system,Attempt:0,} returns sandbox id \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\"" Jul 2 00:18:32.994139 kubelet[2555]: E0702 00:18:32.994080 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:32.996753 kubelet[2555]: E0702 00:18:32.996568 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:33.012236 containerd[1460]: time="2024-07-02T00:18:33.011780650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 
00:18:34.278525 kubelet[2555]: E0702 00:18:34.278467 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:35.462173 containerd[1460]: time="2024-07-02T00:18:35.461527469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:35.465167 containerd[1460]: time="2024-07-02T00:18:35.464941631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:18:35.465904 containerd[1460]: time="2024-07-02T00:18:35.465681107Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:35.473913 containerd[1460]: time="2024-07-02T00:18:35.473792854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:35.476193 containerd[1460]: time="2024-07-02T00:18:35.476087632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.464238048s" Jul 2 00:18:35.479890 containerd[1460]: time="2024-07-02T00:18:35.478304345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference 
\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:18:35.488884 containerd[1460]: time="2024-07-02T00:18:35.487738179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:18:35.515940 containerd[1460]: time="2024-07-02T00:18:35.513620776Z" level=info msg="CreateContainer within sandbox \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:18:35.539681 containerd[1460]: time="2024-07-02T00:18:35.539624006Z" level=info msg="CreateContainer within sandbox \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\"" Jul 2 00:18:35.541638 containerd[1460]: time="2024-07-02T00:18:35.541579918Z" level=info msg="StartContainer for \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\"" Jul 2 00:18:35.611176 systemd[1]: Started cri-containerd-7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06.scope - libcontainer container 7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06. 
Jul 2 00:18:35.702599 containerd[1460]: time="2024-07-02T00:18:35.701336570Z" level=info msg="StartContainer for \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\" returns successfully" Jul 2 00:18:36.277950 kubelet[2555]: E0702 00:18:36.277909 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:36.416911 containerd[1460]: time="2024-07-02T00:18:36.414740132Z" level=info msg="StopContainer for \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\" with timeout 300 (s)" Jul 2 00:18:36.416911 containerd[1460]: time="2024-07-02T00:18:36.416801285Z" level=info msg="Stop container \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\" with signal terminated" Jul 2 00:18:36.436358 systemd[1]: cri-containerd-7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06.scope: Deactivated successfully. Jul 2 00:18:36.503502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06-rootfs.mount: Deactivated successfully. 
Jul 2 00:18:36.512552 containerd[1460]: time="2024-07-02T00:18:36.512220835Z" level=info msg="shim disconnected" id=7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06 namespace=k8s.io Jul 2 00:18:36.512552 containerd[1460]: time="2024-07-02T00:18:36.512314423Z" level=warning msg="cleaning up after shim disconnected" id=7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06 namespace=k8s.io Jul 2 00:18:36.512552 containerd[1460]: time="2024-07-02T00:18:36.512329022Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:36.544283 containerd[1460]: time="2024-07-02T00:18:36.543912832Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:18:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:18:36.550414 containerd[1460]: time="2024-07-02T00:18:36.550345943Z" level=info msg="StopContainer for \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\" returns successfully" Jul 2 00:18:36.552277 containerd[1460]: time="2024-07-02T00:18:36.551251974Z" level=info msg="StopPodSandbox for \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\"" Jul 2 00:18:36.552277 containerd[1460]: time="2024-07-02T00:18:36.551302332Z" level=info msg="Container to stop \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:18:36.553865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e-shm.mount: Deactivated successfully. Jul 2 00:18:36.572911 systemd[1]: cri-containerd-a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e.scope: Deactivated successfully. 
Jul 2 00:18:36.626283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e-rootfs.mount: Deactivated successfully. Jul 2 00:18:36.630226 containerd[1460]: time="2024-07-02T00:18:36.629833430Z" level=info msg="shim disconnected" id=a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e namespace=k8s.io Jul 2 00:18:36.631076 containerd[1460]: time="2024-07-02T00:18:36.630448384Z" level=warning msg="cleaning up after shim disconnected" id=a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e namespace=k8s.io Jul 2 00:18:36.631076 containerd[1460]: time="2024-07-02T00:18:36.630471786Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:36.661241 containerd[1460]: time="2024-07-02T00:18:36.661086075Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:18:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:18:36.662892 containerd[1460]: time="2024-07-02T00:18:36.662786863Z" level=info msg="TearDown network for sandbox \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\" successfully" Jul 2 00:18:36.662892 containerd[1460]: time="2024-07-02T00:18:36.662848386Z" level=info msg="StopPodSandbox for \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\" returns successfully" Jul 2 00:18:36.710416 kubelet[2555]: I0702 00:18:36.710352 2555 topology_manager.go:215] "Topology Admit Handler" podUID="692550c7-5ec8-4d48-b3b1-36b44ca21833" podNamespace="calico-system" podName="calico-typha-95f9b5958-9srrc" Jul 2 00:18:36.710644 kubelet[2555]: E0702 00:18:36.710449 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1cdd4291-9cb6-42ab-9845-5280104db715" containerName="calico-typha" Jul 2 00:18:36.710644 kubelet[2555]: I0702 00:18:36.710491 2555 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1cdd4291-9cb6-42ab-9845-5280104db715" containerName="calico-typha" Jul 2 00:18:36.726892 kubelet[2555]: E0702 00:18:36.724371 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:36.726892 kubelet[2555]: W0702 00:18:36.724400 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:36.726892 kubelet[2555]: E0702 00:18:36.724458 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:36.726892 kubelet[2555]: E0702 00:18:36.724754 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:36.726892 kubelet[2555]: W0702 00:18:36.724765 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:36.726892 kubelet[2555]: E0702 00:18:36.724782 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 2 00:18:36.726892 kubelet[2555]: E0702 00:18:36.725572 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.726892 kubelet[2555]: W0702 00:18:36.725586 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.726892 kubelet[2555]: E0702 00:18:36.725601 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.726892 kubelet[2555]: E0702 00:18:36.725988 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.727373 kubelet[2555]: W0702 00:18:36.726002 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.727373 kubelet[2555]: E0702 00:18:36.726033 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.727373 kubelet[2555]: E0702 00:18:36.726299 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.727373 kubelet[2555]: W0702 00:18:36.726313 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.727373 kubelet[2555]: E0702 00:18:36.726345 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.727373 kubelet[2555]: E0702 00:18:36.726670 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.727373 kubelet[2555]: W0702 00:18:36.726683 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.727373 kubelet[2555]: E0702 00:18:36.726713 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.727373 kubelet[2555]: E0702 00:18:36.726919 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.727373 kubelet[2555]: W0702 00:18:36.726927 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.727723 kubelet[2555]: E0702 00:18:36.726940 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.727723 kubelet[2555]: E0702 00:18:36.727115 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.727723 kubelet[2555]: W0702 00:18:36.727122 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.727723 kubelet[2555]: E0702 00:18:36.727132 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.727723 kubelet[2555]: E0702 00:18:36.727386 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.727723 kubelet[2555]: W0702 00:18:36.727398 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.727723 kubelet[2555]: E0702 00:18:36.727422 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.727723 kubelet[2555]: E0702 00:18:36.727665 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.727723 kubelet[2555]: W0702 00:18:36.727677 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.727723 kubelet[2555]: E0702 00:18:36.727707 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.732225 kubelet[2555]: E0702 00:18:36.729313 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.732225 kubelet[2555]: W0702 00:18:36.729337 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.732225 kubelet[2555]: E0702 00:18:36.729354 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.732225 kubelet[2555]: E0702 00:18:36.729589 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.732225 kubelet[2555]: W0702 00:18:36.729602 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.732225 kubelet[2555]: E0702 00:18:36.729641 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.728697 systemd[1]: Created slice kubepods-besteffort-pod692550c7_5ec8_4d48_b3b1_36b44ca21833.slice - libcontainer container kubepods-besteffort-pod692550c7_5ec8_4d48_b3b1_36b44ca21833.slice.
Jul 2 00:18:36.805051 kubelet[2555]: E0702 00:18:36.804908 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.805051 kubelet[2555]: W0702 00:18:36.804937 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.805051 kubelet[2555]: E0702 00:18:36.804979 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.805305 kubelet[2555]: I0702 00:18:36.805131 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1cdd4291-9cb6-42ab-9845-5280104db715-typha-certs\") pod \"1cdd4291-9cb6-42ab-9845-5280104db715\" (UID: \"1cdd4291-9cb6-42ab-9845-5280104db715\") "
Jul 2 00:18:36.808354 kubelet[2555]: E0702 00:18:36.807796 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.808354 kubelet[2555]: W0702 00:18:36.807822 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.808354 kubelet[2555]: E0702 00:18:36.807877 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.808354 kubelet[2555]: I0702 00:18:36.807920 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cdd4291-9cb6-42ab-9845-5280104db715-tigera-ca-bundle\") pod \"1cdd4291-9cb6-42ab-9845-5280104db715\" (UID: \"1cdd4291-9cb6-42ab-9845-5280104db715\") "
Jul 2 00:18:36.812027 kubelet[2555]: E0702 00:18:36.810754 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.812027 kubelet[2555]: W0702 00:18:36.810789 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.814263 kubelet[2555]: E0702 00:18:36.813356 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.814840 kubelet[2555]: W0702 00:18:36.814454 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.815368 kubelet[2555]: E0702 00:18:36.815020 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.830139 kubelet[2555]: E0702 00:18:36.829361 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.830139 kubelet[2555]: I0702 00:18:36.829460 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8wbr\" (UniqueName: \"kubernetes.io/projected/1cdd4291-9cb6-42ab-9845-5280104db715-kube-api-access-j8wbr\") pod \"1cdd4291-9cb6-42ab-9845-5280104db715\" (UID: \"1cdd4291-9cb6-42ab-9845-5280104db715\") "
Jul 2 00:18:36.832373 kubelet[2555]: E0702 00:18:36.831621 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.832373 kubelet[2555]: W0702 00:18:36.831654 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.832373 kubelet[2555]: E0702 00:18:36.831687 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.832373 kubelet[2555]: I0702 00:18:36.831737 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lgdt\" (UniqueName: \"kubernetes.io/projected/692550c7-5ec8-4d48-b3b1-36b44ca21833-kube-api-access-9lgdt\") pod \"calico-typha-95f9b5958-9srrc\" (UID: \"692550c7-5ec8-4d48-b3b1-36b44ca21833\") " pod="calico-system/calico-typha-95f9b5958-9srrc"
Jul 2 00:18:36.838045 kubelet[2555]: E0702 00:18:36.838005 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.840696 kubelet[2555]: W0702 00:18:36.840560 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.842029 systemd[1]: var-lib-kubelet-pods-1cdd4291\x2d9cb6\x2d42ab\x2d9845\x2d5280104db715-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Jul 2 00:18:36.843541 kubelet[2555]: E0702 00:18:36.842076 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.847331 kubelet[2555]: I0702 00:18:36.843749 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/692550c7-5ec8-4d48-b3b1-36b44ca21833-tigera-ca-bundle\") pod \"calico-typha-95f9b5958-9srrc\" (UID: \"692550c7-5ec8-4d48-b3b1-36b44ca21833\") " pod="calico-system/calico-typha-95f9b5958-9srrc"
Jul 2 00:18:36.847812 systemd[1]: var-lib-kubelet-pods-1cdd4291\x2d9cb6\x2d42ab\x2d9845\x2d5280104db715-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Jul 2 00:18:36.849154 kubelet[2555]: E0702 00:18:36.849107 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.849220 kubelet[2555]: W0702 00:18:36.849150 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.849220 kubelet[2555]: E0702 00:18:36.849195 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.849717 kubelet[2555]: I0702 00:18:36.849669 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cdd4291-9cb6-42ab-9845-5280104db715-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "1cdd4291-9cb6-42ab-9845-5280104db715" (UID: "1cdd4291-9cb6-42ab-9845-5280104db715"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:18:36.850951 kubelet[2555]: E0702 00:18:36.850919 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.850951 kubelet[2555]: W0702 00:18:36.850943 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.851191 kubelet[2555]: E0702 00:18:36.851148 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.851838 kubelet[2555]: E0702 00:18:36.851813 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.851838 kubelet[2555]: W0702 00:18:36.851834 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.852213 kubelet[2555]: E0702 00:18:36.852050 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.853171 kubelet[2555]: I0702 00:18:36.853132 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cdd4291-9cb6-42ab-9845-5280104db715-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "1cdd4291-9cb6-42ab-9845-5280104db715" (UID: "1cdd4291-9cb6-42ab-9845-5280104db715"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:18:36.854196 kubelet[2555]: E0702 00:18:36.854171 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.854196 kubelet[2555]: W0702 00:18:36.854191 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.854533 kubelet[2555]: E0702 00:18:36.854225 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.855229 kubelet[2555]: E0702 00:18:36.855207 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.855229 kubelet[2555]: W0702 00:18:36.855227 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.855969 kubelet[2555]: E0702 00:18:36.855942 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.855969 kubelet[2555]: W0702 00:18:36.855959 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.856220 kubelet[2555]: E0702 00:18:36.855949 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.856220 kubelet[2555]: E0702 00:18:36.856191 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.857088 kubelet[2555]: E0702 00:18:36.857059 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.857088 kubelet[2555]: W0702 00:18:36.857083 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.857365 kubelet[2555]: E0702 00:18:36.857136 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.857365 kubelet[2555]: I0702 00:18:36.857207 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/692550c7-5ec8-4d48-b3b1-36b44ca21833-typha-certs\") pod \"calico-typha-95f9b5958-9srrc\" (UID: \"692550c7-5ec8-4d48-b3b1-36b44ca21833\") " pod="calico-system/calico-typha-95f9b5958-9srrc"
Jul 2 00:18:36.858211 kubelet[2555]: E0702 00:18:36.858186 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.858211 kubelet[2555]: W0702 00:18:36.858206 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.858834 kubelet[2555]: E0702 00:18:36.858293 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.858834 kubelet[2555]: E0702 00:18:36.858776 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.858834 kubelet[2555]: W0702 00:18:36.858802 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.859496 kubelet[2555]: E0702 00:18:36.859473 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.867479 kubelet[2555]: E0702 00:18:36.867437 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.867479 kubelet[2555]: W0702 00:18:36.867465 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.867479 kubelet[2555]: E0702 00:18:36.867494 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.869989 kubelet[2555]: E0702 00:18:36.869924 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.869989 kubelet[2555]: W0702 00:18:36.869953 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.869989 kubelet[2555]: E0702 00:18:36.869983 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.872234 kubelet[2555]: E0702 00:18:36.871097 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.872234 kubelet[2555]: W0702 00:18:36.871129 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.872234 kubelet[2555]: E0702 00:18:36.871154 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.872234 kubelet[2555]: E0702 00:18:36.872154 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.872234 kubelet[2555]: W0702 00:18:36.872179 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.872234 kubelet[2555]: E0702 00:18:36.872205 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.872642 kubelet[2555]: E0702 00:18:36.872493 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.872642 kubelet[2555]: W0702 00:18:36.872503 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.872642 kubelet[2555]: E0702 00:18:36.872514 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.873935 kubelet[2555]: E0702 00:18:36.872848 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.873935 kubelet[2555]: W0702 00:18:36.872881 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.873935 kubelet[2555]: E0702 00:18:36.872900 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.873935 kubelet[2555]: E0702 00:18:36.873160 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.873935 kubelet[2555]: W0702 00:18:36.873170 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.873935 kubelet[2555]: E0702 00:18:36.873184 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.873935 kubelet[2555]: E0702 00:18:36.873341 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.873935 kubelet[2555]: W0702 00:18:36.873348 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.873935 kubelet[2555]: E0702 00:18:36.873357 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.873935 kubelet[2555]: E0702 00:18:36.873522 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.875077 kubelet[2555]: W0702 00:18:36.873529 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.875077 kubelet[2555]: E0702 00:18:36.873539 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.875077 kubelet[2555]: E0702 00:18:36.873695 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.875077 kubelet[2555]: W0702 00:18:36.873704 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.875077 kubelet[2555]: E0702 00:18:36.873715 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.875077 kubelet[2555]: I0702 00:18:36.873880 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cdd4291-9cb6-42ab-9845-5280104db715-kube-api-access-j8wbr" (OuterVolumeSpecName: "kube-api-access-j8wbr") pod "1cdd4291-9cb6-42ab-9845-5280104db715" (UID: "1cdd4291-9cb6-42ab-9845-5280104db715"). InnerVolumeSpecName "kube-api-access-j8wbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:18:36.875077 kubelet[2555]: E0702 00:18:36.874341 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.875077 kubelet[2555]: W0702 00:18:36.874387 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.875077 kubelet[2555]: E0702 00:18:36.874412 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.875326 kubelet[2555]: E0702 00:18:36.874962 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.875326 kubelet[2555]: W0702 00:18:36.874976 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.875326 kubelet[2555]: E0702 00:18:36.874990 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.875326 kubelet[2555]: I0702 00:18:36.875038 2555 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1cdd4291-9cb6-42ab-9845-5280104db715-typha-certs\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\""
Jul 2 00:18:36.875326 kubelet[2555]: I0702 00:18:36.875089 2555 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cdd4291-9cb6-42ab-9845-5280104db715-tigera-ca-bundle\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\""
Jul 2 00:18:36.965895 containerd[1460]: time="2024-07-02T00:18:36.965723486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:36.968081 containerd[1460]: time="2024-07-02T00:18:36.967904826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Jul 2 00:18:36.970118 containerd[1460]: time="2024-07-02T00:18:36.969994946Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:36.973595 containerd[1460]: time="2024-07-02T00:18:36.972902019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:36.975657 containerd[1460]: time="2024-07-02T00:18:36.975402238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.486289635s"
Jul 2 00:18:36.975657 containerd[1460]: time="2024-07-02T00:18:36.975445875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
Jul 2 00:18:36.976837 kubelet[2555]: E0702 00:18:36.976612 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.976837 kubelet[2555]: W0702 00:18:36.976639 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.976837 kubelet[2555]: E0702 00:18:36.976664 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.978557 containerd[1460]: time="2024-07-02T00:18:36.978402455Z" level=info msg="CreateContainer within sandbox \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 2 00:18:36.978998 kubelet[2555]: E0702 00:18:36.978882 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.978998 kubelet[2555]: W0702 00:18:36.978904 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.978998 kubelet[2555]: E0702 00:18:36.978953 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.980720 kubelet[2555]: E0702 00:18:36.980471 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.980720 kubelet[2555]: W0702 00:18:36.980495 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.980720 kubelet[2555]: E0702 00:18:36.980552 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.980720 kubelet[2555]: I0702 00:18:36.980669 2555 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j8wbr\" (UniqueName: \"kubernetes.io/projected/1cdd4291-9cb6-42ab-9845-5280104db715-kube-api-access-j8wbr\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\""
Jul 2 00:18:36.981433 kubelet[2555]: E0702 00:18:36.981058 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.982429 kubelet[2555]: W0702 00:18:36.981633 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.982429 kubelet[2555]: E0702 00:18:36.981742 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.983084 kubelet[2555]: E0702 00:18:36.982800 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.983084 kubelet[2555]: W0702 00:18:36.982827 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.983084 kubelet[2555]: E0702 00:18:36.983015 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.984185 kubelet[2555]: E0702 00:18:36.983809 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.984185 kubelet[2555]: W0702 00:18:36.983831 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.984348 kubelet[2555]: E0702 00:18:36.984216 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.985646 kubelet[2555]: E0702 00:18:36.985590 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.985646 kubelet[2555]: W0702 00:18:36.985633 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.986458 kubelet[2555]: E0702 00:18:36.986419 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.986777 kubelet[2555]: E0702 00:18:36.986758 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.986914 kubelet[2555]: W0702 00:18:36.986777 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.987030 kubelet[2555]: E0702 00:18:36.986971 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:18:36.987661 kubelet[2555]: E0702 00:18:36.987635 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.987661 kubelet[2555]: W0702 00:18:36.987659 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.988955 kubelet[2555]: E0702 00:18:36.988671 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:18:36.988955 kubelet[2555]: W0702 00:18:36.988697 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:18:36.988955 kubelet[2555]: E0702 00:18:36.988766 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 2 00:18:36.989311 kubelet[2555]: E0702 00:18:36.989258 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:36.990014 kubelet[2555]: E0702 00:18:36.989991 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:36.990184 kubelet[2555]: W0702 00:18:36.990084 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:36.990184 kubelet[2555]: E0702 00:18:36.990118 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:36.991020 kubelet[2555]: E0702 00:18:36.990997 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:36.991105 kubelet[2555]: W0702 00:18:36.991020 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:36.991646 kubelet[2555]: E0702 00:18:36.991058 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:36.991646 kubelet[2555]: E0702 00:18:36.991533 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:36.991646 kubelet[2555]: W0702 00:18:36.991551 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:36.991646 kubelet[2555]: E0702 00:18:36.991573 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:36.992989 kubelet[2555]: E0702 00:18:36.992954 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:36.992989 kubelet[2555]: W0702 00:18:36.992978 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:36.993105 kubelet[2555]: E0702 00:18:36.993012 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:36.993934 kubelet[2555]: E0702 00:18:36.993845 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:36.994587 kubelet[2555]: W0702 00:18:36.994010 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:36.994587 kubelet[2555]: E0702 00:18:36.994034 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:36.998595 kubelet[2555]: E0702 00:18:36.998561 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:36.998806 kubelet[2555]: W0702 00:18:36.998788 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:36.999163 kubelet[2555]: E0702 00:18:36.999146 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:18:37.004493 kubelet[2555]: E0702 00:18:37.004449 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:37.004493 kubelet[2555]: W0702 00:18:37.004481 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:37.005486 kubelet[2555]: E0702 00:18:37.004535 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:18:37.012447 containerd[1460]: time="2024-07-02T00:18:37.012371993Z" level=info msg="CreateContainer within sandbox \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\"" Jul 2 00:18:37.014897 containerd[1460]: time="2024-07-02T00:18:37.014189517Z" level=info msg="StartContainer for \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\"" Jul 2 00:18:37.017883 kubelet[2555]: E0702 00:18:37.017755 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:18:37.017883 kubelet[2555]: W0702 00:18:37.017782 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:18:37.017883 kubelet[2555]: E0702 00:18:37.017810 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
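The repeated `driver-call.go` failures above come from kubelet's FlexVolume prober: it executes each plugin binary with the argument `init` and unmarshals the binary's stdout as JSON. Here the executable `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` is missing, so stdout is empty and JSON parsing fails with "unexpected end of JSON input". A minimal sketch (a hypothetical stand-in driver response, not part of this system) of the JSON document the FlexVolume `init` call is expected to emit:

```python
import json

def flexvolume_init_response(supports_attach=False):
    """Hypothetical FlexVolume driver "init" reply.

    kubelet runs `<driver> init` and parses stdout as JSON; an empty
    stdout (e.g. the driver binary is absent, as in the log above)
    produces the "unexpected end of JSON input" unmarshal error.
    """
    return json.dumps({
        "status": "Success",
        "capabilities": {"attach": supports_attach},
    })

if __name__ == "__main__":
    # A real driver would print this to stdout for kubelet to consume.
    print(flexvolume_init_response())
```

Because probing only skips the broken plugin directory ("skipping"), these errors repeat on every probe cycle but do not block the rest of the kubelet's work, which is why container creation continues below.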
Error: unexpected end of JSON input" Jul 2 00:18:37.034912 kubelet[2555]: E0702 00:18:37.034103 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:37.040699 containerd[1460]: time="2024-07-02T00:18:37.038186270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-95f9b5958-9srrc,Uid:692550c7-5ec8-4d48-b3b1-36b44ca21833,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:37.080350 systemd[1]: Started cri-containerd-0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f.scope - libcontainer container 0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f. Jul 2 00:18:37.151306 containerd[1460]: time="2024-07-02T00:18:37.151169712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:37.151306 containerd[1460]: time="2024-07-02T00:18:37.151253841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:37.151634 containerd[1460]: time="2024-07-02T00:18:37.151274496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:37.151634 containerd[1460]: time="2024-07-02T00:18:37.151287961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:37.199178 systemd[1]: Started cri-containerd-1517f2028157c34f4c797d538f33cc8c9610174ff95a04a6e906f6c285117874.scope - libcontainer container 1517f2028157c34f4c797d538f33cc8c9610174ff95a04a6e906f6c285117874. 
Jul 2 00:18:37.319706 containerd[1460]: time="2024-07-02T00:18:37.316464687Z" level=info msg="StartContainer for \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\" returns successfully" Jul 2 00:18:37.317329 systemd[1]: cri-containerd-0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f.scope: Deactivated successfully. Jul 2 00:18:37.388982 containerd[1460]: time="2024-07-02T00:18:37.388742851Z" level=info msg="shim disconnected" id=0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f namespace=k8s.io Jul 2 00:18:37.389976 containerd[1460]: time="2024-07-02T00:18:37.389927132Z" level=warning msg="cleaning up after shim disconnected" id=0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f namespace=k8s.io Jul 2 00:18:37.389976 containerd[1460]: time="2024-07-02T00:18:37.389966176Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:37.427452 containerd[1460]: time="2024-07-02T00:18:37.427274451Z" level=info msg="StopContainer for \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\" with timeout 5 (s)" Jul 2 00:18:37.427647 containerd[1460]: time="2024-07-02T00:18:37.427572857Z" level=info msg="StopContainer for \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\" returns successfully" Jul 2 00:18:37.429937 containerd[1460]: time="2024-07-02T00:18:37.428634392Z" level=info msg="StopPodSandbox for \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\"" Jul 2 00:18:37.438956 kubelet[2555]: I0702 00:18:37.438914 2555 scope.go:117] "RemoveContainer" containerID="7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06" Jul 2 00:18:37.446326 systemd[1]: Removed slice kubepods-besteffort-pod1cdd4291_9cb6_42ab_9845_5280104db715.slice - libcontainer container kubepods-besteffort-pod1cdd4291_9cb6_42ab_9845_5280104db715.slice. 
Jul 2 00:18:37.453952 containerd[1460]: time="2024-07-02T00:18:37.452222956Z" level=info msg="RemoveContainer for \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\"" Jul 2 00:18:37.457591 systemd[1]: cri-containerd-589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687.scope: Deactivated successfully. Jul 2 00:18:37.479448 containerd[1460]: time="2024-07-02T00:18:37.478123833Z" level=info msg="RemoveContainer for \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\" returns successfully" Jul 2 00:18:37.479448 containerd[1460]: time="2024-07-02T00:18:37.479151635Z" level=error msg="ContainerStatus for \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\": not found" Jul 2 00:18:37.479728 kubelet[2555]: I0702 00:18:37.478717 2555 scope.go:117] "RemoveContainer" containerID="7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06" Jul 2 00:18:37.482758 kubelet[2555]: E0702 00:18:37.481993 2555 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\": not found" containerID="7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06" Jul 2 00:18:37.482758 kubelet[2555]: I0702 00:18:37.482069 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06"} err="failed to get container status \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e183e381e1554689b83738f7016040f8f61403309772d9ce48a230f372caf06\": not found" Jul 2 00:18:37.497802 containerd[1460]: 
time="2024-07-02T00:18:37.497260901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-95f9b5958-9srrc,Uid:692550c7-5ec8-4d48-b3b1-36b44ca21833,Namespace:calico-system,Attempt:0,} returns sandbox id \"1517f2028157c34f4c797d538f33cc8c9610174ff95a04a6e906f6c285117874\"" Jul 2 00:18:37.500913 kubelet[2555]: E0702 00:18:37.499300 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:37.518531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687-shm.mount: Deactivated successfully. Jul 2 00:18:37.518681 systemd[1]: var-lib-kubelet-pods-1cdd4291\x2d9cb6\x2d42ab\x2d9845\x2d5280104db715-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj8wbr.mount: Deactivated successfully. Jul 2 00:18:37.555258 containerd[1460]: time="2024-07-02T00:18:37.551639006Z" level=info msg="CreateContainer within sandbox \"1517f2028157c34f4c797d538f33cc8c9610174ff95a04a6e906f6c285117874\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:18:37.553165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687-rootfs.mount: Deactivated successfully. 
Jul 2 00:18:37.567688 containerd[1460]: time="2024-07-02T00:18:37.565020227Z" level=info msg="shim disconnected" id=589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687 namespace=k8s.io Jul 2 00:18:37.567688 containerd[1460]: time="2024-07-02T00:18:37.565107700Z" level=warning msg="cleaning up after shim disconnected" id=589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687 namespace=k8s.io Jul 2 00:18:37.567688 containerd[1460]: time="2024-07-02T00:18:37.565116764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:37.596048 containerd[1460]: time="2024-07-02T00:18:37.595990240Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:18:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:18:37.597665 containerd[1460]: time="2024-07-02T00:18:37.597439969Z" level=info msg="CreateContainer within sandbox \"1517f2028157c34f4c797d538f33cc8c9610174ff95a04a6e906f6c285117874\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"91460481d62300f7a939eb61f015a54a57caa6674d8e60b4c2776c249d3cbc93\"" Jul 2 00:18:37.597974 containerd[1460]: time="2024-07-02T00:18:37.597626686Z" level=info msg="TearDown network for sandbox \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\" successfully" Jul 2 00:18:37.597974 containerd[1460]: time="2024-07-02T00:18:37.597966603Z" level=info msg="StopPodSandbox for \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\" returns successfully" Jul 2 00:18:37.600685 containerd[1460]: time="2024-07-02T00:18:37.599057457Z" level=info msg="StartContainer for \"91460481d62300f7a939eb61f015a54a57caa6674d8e60b4c2776c249d3cbc93\"" Jul 2 00:18:37.684161 systemd[1]: Started cri-containerd-91460481d62300f7a939eb61f015a54a57caa6674d8e60b4c2776c249d3cbc93.scope - libcontainer container 
91460481d62300f7a939eb61f015a54a57caa6674d8e60b4c2776c249d3cbc93. Jul 2 00:18:37.708782 kubelet[2555]: I0702 00:18:37.708624 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-var-run-calico\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.709538 kubelet[2555]: I0702 00:18:37.709417 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a031f6b3-45d9-401c-bb15-a710f4226970-node-certs\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.710015 kubelet[2555]: I0702 00:18:37.709908 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-flexvol-driver-host\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.710397 kubelet[2555]: I0702 00:18:37.710301 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-var-lib-calico\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.711054 kubelet[2555]: I0702 00:18:37.710880 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a031f6b3-45d9-401c-bb15-a710f4226970-tigera-ca-bundle\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.711473 kubelet[2555]: I0702 00:18:37.711319 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-net-dir\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.711730 kubelet[2555]: I0702 00:18:37.711715 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8qjf\" (UniqueName: \"kubernetes.io/projected/a031f6b3-45d9-401c-bb15-a710f4226970-kube-api-access-p8qjf\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.711819 kubelet[2555]: I0702 00:18:37.711810 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-xtables-lock\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.711903 kubelet[2555]: I0702 00:18:37.711893 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-policysync\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.712010 kubelet[2555]: I0702 00:18:37.711996 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-log-dir\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.712143 kubelet[2555]: I0702 00:18:37.712124 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-bin-dir\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.712280 kubelet[2555]: I0702 00:18:37.712267 2555 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-lib-modules\") pod \"a031f6b3-45d9-401c-bb15-a710f4226970\" (UID: \"a031f6b3-45d9-401c-bb15-a710f4226970\") " Jul 2 00:18:37.712910 kubelet[2555]: I0702 00:18:37.712651 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.713642 kubelet[2555]: I0702 00:18:37.708740 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.713642 kubelet[2555]: I0702 00:18:37.713416 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.713642 kubelet[2555]: I0702 00:18:37.713450 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.713812 kubelet[2555]: I0702 00:18:37.713686 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.713812 kubelet[2555]: I0702 00:18:37.713716 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.715919 kubelet[2555]: I0702 00:18:37.714948 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a031f6b3-45d9-401c-bb15-a710f4226970-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:18:37.717951 kubelet[2555]: I0702 00:18:37.715143 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-policysync" (OuterVolumeSpecName: "policysync") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.718158 kubelet[2555]: I0702 00:18:37.715178 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.718158 kubelet[2555]: I0702 00:18:37.715200 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:18:37.718158 kubelet[2555]: I0702 00:18:37.715276 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a031f6b3-45d9-401c-bb15-a710f4226970-node-certs" (OuterVolumeSpecName: "node-certs") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:18:37.726188 kubelet[2555]: I0702 00:18:37.726123 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a031f6b3-45d9-401c-bb15-a710f4226970-kube-api-access-p8qjf" (OuterVolumeSpecName: "kube-api-access-p8qjf") pod "a031f6b3-45d9-401c-bb15-a710f4226970" (UID: "a031f6b3-45d9-401c-bb15-a710f4226970"). InnerVolumeSpecName "kube-api-access-p8qjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:18:37.811115 containerd[1460]: time="2024-07-02T00:18:37.811062451Z" level=info msg="StartContainer for \"91460481d62300f7a939eb61f015a54a57caa6674d8e60b4c2776c249d3cbc93\" returns successfully" Jul 2 00:18:37.812832 kubelet[2555]: I0702 00:18:37.812774 2555 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-lib-modules\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.812832 kubelet[2555]: I0702 00:18:37.812807 2555 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a031f6b3-45d9-401c-bb15-a710f4226970-node-certs\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.812832 kubelet[2555]: I0702 00:18:37.812823 2555 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-flexvol-driver-host\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.812832 kubelet[2555]: I0702 00:18:37.812840 2555 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-var-run-calico\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.812832 kubelet[2555]: I0702 00:18:37.812869 2555 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-var-lib-calico\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.813704 kubelet[2555]: I0702 00:18:37.812887 2555 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a031f6b3-45d9-401c-bb15-a710f4226970-tigera-ca-bundle\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.813704 
kubelet[2555]: I0702 00:18:37.812897 2555 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-net-dir\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.813704 kubelet[2555]: I0702 00:18:37.812908 2555 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-policysync\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.813704 kubelet[2555]: I0702 00:18:37.812919 2555 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p8qjf\" (UniqueName: \"kubernetes.io/projected/a031f6b3-45d9-401c-bb15-a710f4226970-kube-api-access-p8qjf\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.813704 kubelet[2555]: I0702 00:18:37.812929 2555 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-xtables-lock\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.813704 kubelet[2555]: I0702 00:18:37.812948 2555 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-log-dir\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:37.813704 kubelet[2555]: I0702 00:18:37.812957 2555 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a031f6b3-45d9-401c-bb15-a710f4226970-cni-bin-dir\") on node \"ci-3975.1.1-8-31c642c6eb\" DevicePath \"\"" Jul 2 00:18:38.279427 kubelet[2555]: E0702 00:18:38.279019 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:38.283970 kubelet[2555]: I0702 00:18:38.282396 2555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1cdd4291-9cb6-42ab-9845-5280104db715" path="/var/lib/kubelet/pods/1cdd4291-9cb6-42ab-9845-5280104db715/volumes" Jul 2 00:18:38.288958 systemd[1]: Removed slice kubepods-besteffort-poda031f6b3_45d9_401c_bb15_a710f4226970.slice - libcontainer container kubepods-besteffort-poda031f6b3_45d9_401c_bb15_a710f4226970.slice. Jul 2 00:18:38.444634 kubelet[2555]: I0702 00:18:38.443583 2555 scope.go:117] "RemoveContainer" containerID="0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f" Jul 2 00:18:38.460290 containerd[1460]: time="2024-07-02T00:18:38.458260807Z" level=info msg="RemoveContainer for \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\"" Jul 2 00:18:38.469302 kubelet[2555]: E0702 00:18:38.469118 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:38.474318 containerd[1460]: time="2024-07-02T00:18:38.473222033Z" level=info msg="RemoveContainer for \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\" returns successfully" Jul 2 00:18:38.478082 kubelet[2555]: I0702 00:18:38.477942 2555 scope.go:117] "RemoveContainer" containerID="0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f" Jul 2 00:18:38.478983 kubelet[2555]: E0702 00:18:38.478551 2555 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\": not found" containerID="0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f" Jul 2 00:18:38.478983 kubelet[2555]: I0702 00:18:38.478603 2555 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f"} err="failed to get container status \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\": not found" Jul 2 00:18:38.479166 containerd[1460]: time="2024-07-02T00:18:38.478392610Z" level=error msg="ContainerStatus for \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a2e66d08c538a7f0744826257b2f2a115717707106aacdd007ff6680b5cc59f\": not found" Jul 2 00:18:38.508204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933703976.mount: Deactivated successfully. Jul 2 00:18:38.508465 systemd[1]: var-lib-kubelet-pods-a031f6b3\x2d45d9\x2d401c\x2dbb15\x2da710f4226970-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp8qjf.mount: Deactivated successfully. Jul 2 00:18:38.508558 systemd[1]: var-lib-kubelet-pods-a031f6b3\x2d45d9\x2d401c\x2dbb15\x2da710f4226970-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jul 2 00:18:38.533691 kubelet[2555]: I0702 00:18:38.533552 2555 topology_manager.go:215] "Topology Admit Handler" podUID="6b4f37f5-5b21-45b7-874e-22c0e5067257" podNamespace="calico-system" podName="calico-node-th4hz" Jul 2 00:18:38.533691 kubelet[2555]: E0702 00:18:38.533617 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a031f6b3-45d9-401c-bb15-a710f4226970" containerName="flexvol-driver" Jul 2 00:18:38.533691 kubelet[2555]: I0702 00:18:38.533653 2555 memory_manager.go:354] "RemoveStaleState removing state" podUID="a031f6b3-45d9-401c-bb15-a710f4226970" containerName="flexvol-driver" Jul 2 00:18:38.547287 systemd[1]: Created slice kubepods-besteffort-pod6b4f37f5_5b21_45b7_874e_22c0e5067257.slice - libcontainer container kubepods-besteffort-pod6b4f37f5_5b21_45b7_874e_22c0e5067257.slice. Jul 2 00:18:38.575541 kubelet[2555]: I0702 00:18:38.575107 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-95f9b5958-9srrc" podStartSLOduration=5.575063411 podStartE2EDuration="5.575063411s" podCreationTimestamp="2024-07-02 00:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:38.552006604 +0000 UTC m=+26.536410675" watchObservedRunningTime="2024-07-02 00:18:38.575063411 +0000 UTC m=+26.559467518" Jul 2 00:18:38.618297 kubelet[2555]: I0702 00:18:38.618256 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6b4f37f5-5b21-45b7-874e-22c0e5067257-node-certs\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.618690 kubelet[2555]: I0702 00:18:38.618620 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-cni-bin-dir\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.618997 kubelet[2555]: I0702 00:18:38.618833 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b4f37f5-5b21-45b7-874e-22c0e5067257-tigera-ca-bundle\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.618997 kubelet[2555]: I0702 00:18:38.618917 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-var-lib-calico\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.618997 kubelet[2555]: I0702 00:18:38.618948 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-cni-net-dir\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.620505 kubelet[2555]: I0702 00:18:38.620012 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-cni-log-dir\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.620505 kubelet[2555]: I0702 00:18:38.620055 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-flexvol-driver-host\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.620505 kubelet[2555]: I0702 00:18:38.620085 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-policysync\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.620505 kubelet[2555]: I0702 00:18:38.620306 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-var-run-calico\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.620505 kubelet[2555]: I0702 00:18:38.620328 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-xtables-lock\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.620694 kubelet[2555]: I0702 00:18:38.620357 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdc9\" (UniqueName: \"kubernetes.io/projected/6b4f37f5-5b21-45b7-874e-22c0e5067257-kube-api-access-bmdc9\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.620694 kubelet[2555]: I0702 00:18:38.620378 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6b4f37f5-5b21-45b7-874e-22c0e5067257-lib-modules\") pod \"calico-node-th4hz\" (UID: \"6b4f37f5-5b21-45b7-874e-22c0e5067257\") " pod="calico-system/calico-node-th4hz" Jul 2 00:18:38.854675 kubelet[2555]: E0702 00:18:38.854454 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:38.856814 containerd[1460]: time="2024-07-02T00:18:38.856400962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-th4hz,Uid:6b4f37f5-5b21-45b7-874e-22c0e5067257,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:38.893778 containerd[1460]: time="2024-07-02T00:18:38.893625171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:38.895478 containerd[1460]: time="2024-07-02T00:18:38.895338834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:38.895478 containerd[1460]: time="2024-07-02T00:18:38.895416234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:38.895903 containerd[1460]: time="2024-07-02T00:18:38.895459311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:38.928367 systemd[1]: Started cri-containerd-c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22.scope - libcontainer container c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22. 
Jul 2 00:18:38.963547 containerd[1460]: time="2024-07-02T00:18:38.963416573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-th4hz,Uid:6b4f37f5-5b21-45b7-874e-22c0e5067257,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22\"" Jul 2 00:18:38.965254 kubelet[2555]: E0702 00:18:38.964708 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:38.969714 containerd[1460]: time="2024-07-02T00:18:38.969670355Z" level=info msg="CreateContainer within sandbox \"c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:18:38.994836 containerd[1460]: time="2024-07-02T00:18:38.994776284Z" level=info msg="CreateContainer within sandbox \"c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a38f44e231119c6e0dad85ea0359a053318afcbafa78ccd0ae5c034abb2b70e7\"" Jul 2 00:18:38.995960 containerd[1460]: time="2024-07-02T00:18:38.995916671Z" level=info msg="StartContainer for \"a38f44e231119c6e0dad85ea0359a053318afcbafa78ccd0ae5c034abb2b70e7\"" Jul 2 00:18:39.035157 systemd[1]: Started cri-containerd-a38f44e231119c6e0dad85ea0359a053318afcbafa78ccd0ae5c034abb2b70e7.scope - libcontainer container a38f44e231119c6e0dad85ea0359a053318afcbafa78ccd0ae5c034abb2b70e7. Jul 2 00:18:39.087999 containerd[1460]: time="2024-07-02T00:18:39.087934810Z" level=info msg="StartContainer for \"a38f44e231119c6e0dad85ea0359a053318afcbafa78ccd0ae5c034abb2b70e7\" returns successfully" Jul 2 00:18:39.100691 systemd[1]: cri-containerd-a38f44e231119c6e0dad85ea0359a053318afcbafa78ccd0ae5c034abb2b70e7.scope: Deactivated successfully. 
Jul 2 00:18:39.149486 containerd[1460]: time="2024-07-02T00:18:39.149028013Z" level=info msg="shim disconnected" id=a38f44e231119c6e0dad85ea0359a053318afcbafa78ccd0ae5c034abb2b70e7 namespace=k8s.io Jul 2 00:18:39.149486 containerd[1460]: time="2024-07-02T00:18:39.149186363Z" level=warning msg="cleaning up after shim disconnected" id=a38f44e231119c6e0dad85ea0359a053318afcbafa78ccd0ae5c034abb2b70e7 namespace=k8s.io Jul 2 00:18:39.149486 containerd[1460]: time="2024-07-02T00:18:39.149197388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:39.471212 kubelet[2555]: E0702 00:18:39.470847 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:39.475006 containerd[1460]: time="2024-07-02T00:18:39.474426261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:18:40.277744 kubelet[2555]: E0702 00:18:40.277221 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:40.280643 kubelet[2555]: I0702 00:18:40.280552 2555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a031f6b3-45d9-401c-bb15-a710f4226970" path="/var/lib/kubelet/pods/a031f6b3-45d9-401c-bb15-a710f4226970/volumes" Jul 2 00:18:42.072672 kubelet[2555]: I0702 00:18:42.071327 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:18:42.074817 kubelet[2555]: E0702 00:18:42.074301 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:42.281504 kubelet[2555]: E0702 
00:18:42.279847 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:42.481894 kubelet[2555]: E0702 00:18:42.481733 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:43.787024 containerd[1460]: time="2024-07-02T00:18:43.786194001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:43.789116 containerd[1460]: time="2024-07-02T00:18:43.789043802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:18:43.792390 containerd[1460]: time="2024-07-02T00:18:43.792130095Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:43.850233 containerd[1460]: time="2024-07-02T00:18:43.850100544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:43.852947 containerd[1460]: time="2024-07-02T00:18:43.852082138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.37760615s" Jul 2 00:18:43.852947 containerd[1460]: 
time="2024-07-02T00:18:43.852151802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:18:43.856716 containerd[1460]: time="2024-07-02T00:18:43.856675347Z" level=info msg="CreateContainer within sandbox \"c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:18:43.896437 containerd[1460]: time="2024-07-02T00:18:43.896201282Z" level=info msg="CreateContainer within sandbox \"c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668\"" Jul 2 00:18:43.898882 containerd[1460]: time="2024-07-02T00:18:43.897508320Z" level=info msg="StartContainer for \"1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668\"" Jul 2 00:18:44.008180 systemd[1]: run-containerd-runc-k8s.io-1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668-runc.EMyBO0.mount: Deactivated successfully. Jul 2 00:18:44.017182 systemd[1]: Started cri-containerd-1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668.scope - libcontainer container 1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668. 
Jul 2 00:18:44.083013 containerd[1460]: time="2024-07-02T00:18:44.082781555Z" level=info msg="StartContainer for \"1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668\" returns successfully" Jul 2 00:18:44.278264 kubelet[2555]: E0702 00:18:44.277757 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:44.493143 kubelet[2555]: E0702 00:18:44.493095 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:44.568838 systemd[1]: cri-containerd-1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668.scope: Deactivated successfully. Jul 2 00:18:44.617875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668-rootfs.mount: Deactivated successfully. 
Jul 2 00:18:44.622778 containerd[1460]: time="2024-07-02T00:18:44.622690703Z" level=info msg="shim disconnected" id=1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668 namespace=k8s.io Jul 2 00:18:44.622778 containerd[1460]: time="2024-07-02T00:18:44.622774173Z" level=warning msg="cleaning up after shim disconnected" id=1124f5f0fc301c97ade2a98f6ad03c3dd71137566f2d088e6fcd848292e0b668 namespace=k8s.io Jul 2 00:18:44.623020 containerd[1460]: time="2024-07-02T00:18:44.622789982Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:18:44.639445 containerd[1460]: time="2024-07-02T00:18:44.639348445Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:18:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:18:44.650040 kubelet[2555]: I0702 00:18:44.649545 2555 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:18:44.682021 kubelet[2555]: I0702 00:18:44.681960 2555 topology_manager.go:215] "Topology Admit Handler" podUID="88002364-1346-4c15-9674-d6ca67a6704b" podNamespace="kube-system" podName="coredns-76f75df574-9gzwx" Jul 2 00:18:44.687700 kubelet[2555]: I0702 00:18:44.687248 2555 topology_manager.go:215] "Topology Admit Handler" podUID="8d27d654-6b07-46b5-9c59-a377d9f9d512" podNamespace="calico-system" podName="calico-kube-controllers-84d6cb5bbb-c2tkd" Jul 2 00:18:44.690411 kubelet[2555]: I0702 00:18:44.690376 2555 topology_manager.go:215] "Topology Admit Handler" podUID="824774ed-b893-4741-9700-12c3e4efcaed" podNamespace="kube-system" podName="coredns-76f75df574-ccp64" Jul 2 00:18:44.699654 systemd[1]: Created slice kubepods-burstable-pod88002364_1346_4c15_9674_d6ca67a6704b.slice - libcontainer container kubepods-burstable-pod88002364_1346_4c15_9674_d6ca67a6704b.slice. 
Jul 2 00:18:44.714618 systemd[1]: Created slice kubepods-besteffort-pod8d27d654_6b07_46b5_9c59_a377d9f9d512.slice - libcontainer container kubepods-besteffort-pod8d27d654_6b07_46b5_9c59_a377d9f9d512.slice. Jul 2 00:18:44.725234 systemd[1]: Created slice kubepods-burstable-pod824774ed_b893_4741_9700_12c3e4efcaed.slice - libcontainer container kubepods-burstable-pod824774ed_b893_4741_9700_12c3e4efcaed.slice. Jul 2 00:18:44.767341 kubelet[2555]: I0702 00:18:44.766826 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb47v\" (UniqueName: \"kubernetes.io/projected/824774ed-b893-4741-9700-12c3e4efcaed-kube-api-access-vb47v\") pod \"coredns-76f75df574-ccp64\" (UID: \"824774ed-b893-4741-9700-12c3e4efcaed\") " pod="kube-system/coredns-76f75df574-ccp64" Jul 2 00:18:44.767341 kubelet[2555]: I0702 00:18:44.766918 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58l6t\" (UniqueName: \"kubernetes.io/projected/8d27d654-6b07-46b5-9c59-a377d9f9d512-kube-api-access-58l6t\") pod \"calico-kube-controllers-84d6cb5bbb-c2tkd\" (UID: \"8d27d654-6b07-46b5-9c59-a377d9f9d512\") " pod="calico-system/calico-kube-controllers-84d6cb5bbb-c2tkd" Jul 2 00:18:44.767341 kubelet[2555]: I0702 00:18:44.766947 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88002364-1346-4c15-9674-d6ca67a6704b-config-volume\") pod \"coredns-76f75df574-9gzwx\" (UID: \"88002364-1346-4c15-9674-d6ca67a6704b\") " pod="kube-system/coredns-76f75df574-9gzwx" Jul 2 00:18:44.767341 kubelet[2555]: I0702 00:18:44.767090 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgh7j\" (UniqueName: \"kubernetes.io/projected/88002364-1346-4c15-9674-d6ca67a6704b-kube-api-access-jgh7j\") pod \"coredns-76f75df574-9gzwx\" (UID: 
\"88002364-1346-4c15-9674-d6ca67a6704b\") " pod="kube-system/coredns-76f75df574-9gzwx" Jul 2 00:18:44.767341 kubelet[2555]: I0702 00:18:44.767148 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d27d654-6b07-46b5-9c59-a377d9f9d512-tigera-ca-bundle\") pod \"calico-kube-controllers-84d6cb5bbb-c2tkd\" (UID: \"8d27d654-6b07-46b5-9c59-a377d9f9d512\") " pod="calico-system/calico-kube-controllers-84d6cb5bbb-c2tkd" Jul 2 00:18:44.767833 kubelet[2555]: I0702 00:18:44.767172 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/824774ed-b893-4741-9700-12c3e4efcaed-config-volume\") pod \"coredns-76f75df574-ccp64\" (UID: \"824774ed-b893-4741-9700-12c3e4efcaed\") " pod="kube-system/coredns-76f75df574-ccp64" Jul 2 00:18:45.008942 kubelet[2555]: E0702 00:18:45.007747 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:45.009898 containerd[1460]: time="2024-07-02T00:18:45.009405567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9gzwx,Uid:88002364-1346-4c15-9674-d6ca67a6704b,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:45.022027 containerd[1460]: time="2024-07-02T00:18:45.021533072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84d6cb5bbb-c2tkd,Uid:8d27d654-6b07-46b5-9c59-a377d9f9d512,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:45.036180 kubelet[2555]: E0702 00:18:45.035223 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:45.043809 containerd[1460]: time="2024-07-02T00:18:45.043758946Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ccp64,Uid:824774ed-b893-4741-9700-12c3e4efcaed,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:45.255815 containerd[1460]: time="2024-07-02T00:18:45.255644019Z" level=error msg="Failed to destroy network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.262637 containerd[1460]: time="2024-07-02T00:18:45.261663097Z" level=error msg="encountered an error cleaning up failed sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.262637 containerd[1460]: time="2024-07-02T00:18:45.261788833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9gzwx,Uid:88002364-1346-4c15-9674-d6ca67a6704b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.262909 kubelet[2555]: E0702 00:18:45.262136 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.262909 
kubelet[2555]: E0702 00:18:45.262204 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9gzwx" Jul 2 00:18:45.262909 kubelet[2555]: E0702 00:18:45.262226 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9gzwx" Jul 2 00:18:45.263018 kubelet[2555]: E0702 00:18:45.262303 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-9gzwx_kube-system(88002364-1346-4c15-9674-d6ca67a6704b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9gzwx_kube-system(88002364-1346-4c15-9674-d6ca67a6704b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9gzwx" podUID="88002364-1346-4c15-9674-d6ca67a6704b" Jul 2 00:18:45.275670 containerd[1460]: time="2024-07-02T00:18:45.275430818Z" level=error msg="Failed to destroy network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.277942 containerd[1460]: time="2024-07-02T00:18:45.277443516Z" level=error msg="Failed to destroy network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.277942 containerd[1460]: time="2024-07-02T00:18:45.277716132Z" level=error msg="encountered an error cleaning up failed sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.277942 containerd[1460]: time="2024-07-02T00:18:45.277787030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84d6cb5bbb-c2tkd,Uid:8d27d654-6b07-46b5-9c59-a377d9f9d512,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.278946 kubelet[2555]: E0702 00:18:45.278476 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 
2 00:18:45.278946 kubelet[2555]: E0702 00:18:45.278565 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84d6cb5bbb-c2tkd" Jul 2 00:18:45.278946 kubelet[2555]: E0702 00:18:45.278602 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84d6cb5bbb-c2tkd" Jul 2 00:18:45.279850 kubelet[2555]: E0702 00:18:45.278676 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84d6cb5bbb-c2tkd_calico-system(8d27d654-6b07-46b5-9c59-a377d9f9d512)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84d6cb5bbb-c2tkd_calico-system(8d27d654-6b07-46b5-9c59-a377d9f9d512)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84d6cb5bbb-c2tkd" podUID="8d27d654-6b07-46b5-9c59-a377d9f9d512" Jul 2 00:18:45.280612 kubelet[2555]: E0702 00:18:45.280493 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.280612 kubelet[2555]: E0702 00:18:45.280581 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ccp64" Jul 2 00:18:45.280720 containerd[1460]: time="2024-07-02T00:18:45.279255503Z" level=error msg="encountered an error cleaning up failed sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.280720 containerd[1460]: time="2024-07-02T00:18:45.280122470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ccp64,Uid:824774ed-b893-4741-9700-12c3e4efcaed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.280802 kubelet[2555]: E0702 00:18:45.280612 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ccp64" Jul 2 00:18:45.280802 kubelet[2555]: E0702 00:18:45.280690 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ccp64_kube-system(824774ed-b893-4741-9700-12c3e4efcaed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ccp64_kube-system(824774ed-b893-4741-9700-12c3e4efcaed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ccp64" podUID="824774ed-b893-4741-9700-12c3e4efcaed" Jul 2 00:18:45.498604 kubelet[2555]: E0702 00:18:45.498529 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:45.503593 kubelet[2555]: I0702 00:18:45.503125 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:18:45.503802 containerd[1460]: time="2024-07-02T00:18:45.503740912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:18:45.508247 containerd[1460]: time="2024-07-02T00:18:45.507480022Z" level=info msg="StopPodSandbox for \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\"" Jul 2 00:18:45.508247 containerd[1460]: time="2024-07-02T00:18:45.507717368Z" level=info msg="Ensure that sandbox 
226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467 in task-service has been cleanup successfully" Jul 2 00:18:45.511923 kubelet[2555]: I0702 00:18:45.510031 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:18:45.512174 containerd[1460]: time="2024-07-02T00:18:45.511457335Z" level=info msg="StopPodSandbox for \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\"" Jul 2 00:18:45.512174 containerd[1460]: time="2024-07-02T00:18:45.511757976Z" level=info msg="Ensure that sandbox 661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499 in task-service has been cleanup successfully" Jul 2 00:18:45.515312 kubelet[2555]: I0702 00:18:45.515280 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:18:45.517031 containerd[1460]: time="2024-07-02T00:18:45.516990481Z" level=info msg="StopPodSandbox for \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\"" Jul 2 00:18:45.517260 containerd[1460]: time="2024-07-02T00:18:45.517238239Z" level=info msg="Ensure that sandbox f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda in task-service has been cleanup successfully" Jul 2 00:18:45.589143 containerd[1460]: time="2024-07-02T00:18:45.588968478Z" level=error msg="StopPodSandbox for \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\" failed" error="failed to destroy network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.589718 kubelet[2555]: E0702 00:18:45.589437 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:18:45.589718 kubelet[2555]: E0702 00:18:45.589504 2555 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda"} Jul 2 00:18:45.589718 kubelet[2555]: E0702 00:18:45.589569 2555 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d27d654-6b07-46b5-9c59-a377d9f9d512\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:18:45.589718 kubelet[2555]: E0702 00:18:45.589618 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d27d654-6b07-46b5-9c59-a377d9f9d512\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84d6cb5bbb-c2tkd" podUID="8d27d654-6b07-46b5-9c59-a377d9f9d512" Jul 2 00:18:45.605725 containerd[1460]: time="2024-07-02T00:18:45.605059515Z" level=error msg="StopPodSandbox for 
\"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\" failed" error="failed to destroy network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.606024 kubelet[2555]: E0702 00:18:45.605356 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:18:45.606024 kubelet[2555]: E0702 00:18:45.605982 2555 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467"} Jul 2 00:18:45.606126 kubelet[2555]: E0702 00:18:45.606035 2555 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"824774ed-b893-4741-9700-12c3e4efcaed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:18:45.606126 kubelet[2555]: E0702 00:18:45.606069 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"824774ed-b893-4741-9700-12c3e4efcaed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ccp64" podUID="824774ed-b893-4741-9700-12c3e4efcaed" Jul 2 00:18:45.610992 containerd[1460]: time="2024-07-02T00:18:45.610925943Z" level=error msg="StopPodSandbox for \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\" failed" error="failed to destroy network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:45.611304 kubelet[2555]: E0702 00:18:45.611238 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:18:45.611304 kubelet[2555]: E0702 00:18:45.611284 2555 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499"} Jul 2 00:18:45.611496 kubelet[2555]: E0702 00:18:45.611342 2555 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88002364-1346-4c15-9674-d6ca67a6704b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:18:45.611496 kubelet[2555]: E0702 00:18:45.611383 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88002364-1346-4c15-9674-d6ca67a6704b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9gzwx" podUID="88002364-1346-4c15-9674-d6ca67a6704b" Jul 2 00:18:45.894370 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467-shm.mount: Deactivated successfully. Jul 2 00:18:45.894482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499-shm.mount: Deactivated successfully. Jul 2 00:18:46.288449 systemd[1]: Created slice kubepods-besteffort-pod40064fc9_24a4_4ccf_9623_b652332a27c6.slice - libcontainer container kubepods-besteffort-pod40064fc9_24a4_4ccf_9623_b652332a27c6.slice. 
Jul 2 00:18:46.292258 containerd[1460]: time="2024-07-02T00:18:46.292181288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kkwj,Uid:40064fc9-24a4-4ccf-9623-b652332a27c6,Namespace:calico-system,Attempt:0,}" Jul 2 00:18:46.472000 containerd[1460]: time="2024-07-02T00:18:46.468981144Z" level=error msg="Failed to destroy network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:46.472000 containerd[1460]: time="2024-07-02T00:18:46.469429405Z" level=error msg="encountered an error cleaning up failed sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:46.472000 containerd[1460]: time="2024-07-02T00:18:46.469490691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kkwj,Uid:40064fc9-24a4-4ccf-9623-b652332a27c6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:46.472537 kubelet[2555]: E0702 00:18:46.472060 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:46.472537 kubelet[2555]: E0702 00:18:46.472190 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kkwj" Jul 2 00:18:46.472537 kubelet[2555]: E0702 00:18:46.472220 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kkwj" Jul 2 00:18:46.473112 kubelet[2555]: E0702 00:18:46.472317 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kkwj_calico-system(40064fc9-24a4-4ccf-9623-b652332a27c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kkwj_calico-system(40064fc9-24a4-4ccf-9623-b652332a27c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:46.478426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86-shm.mount: 
Deactivated successfully. Jul 2 00:18:46.520246 kubelet[2555]: I0702 00:18:46.520210 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:18:46.521693 containerd[1460]: time="2024-07-02T00:18:46.521501888Z" level=info msg="StopPodSandbox for \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\"" Jul 2 00:18:46.522606 containerd[1460]: time="2024-07-02T00:18:46.521961679Z" level=info msg="Ensure that sandbox 16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86 in task-service has been cleanup successfully" Jul 2 00:18:46.567989 containerd[1460]: time="2024-07-02T00:18:46.567755203Z" level=error msg="StopPodSandbox for \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\" failed" error="failed to destroy network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:18:46.569076 kubelet[2555]: E0702 00:18:46.569038 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:18:46.569294 kubelet[2555]: E0702 00:18:46.569095 2555 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86"} Jul 2 00:18:46.569294 kubelet[2555]: E0702 00:18:46.569134 2555 kuberuntime_manager.go:1081] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40064fc9-24a4-4ccf-9623-b652332a27c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:18:46.569294 kubelet[2555]: E0702 00:18:46.569182 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40064fc9-24a4-4ccf-9623-b652332a27c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kkwj" podUID="40064fc9-24a4-4ccf-9623-b652332a27c6" Jul 2 00:18:54.011085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180567081.mount: Deactivated successfully. 
Jul 2 00:18:54.079662 containerd[1460]: time="2024-07-02T00:18:54.079520589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:18:54.087931 containerd[1460]: time="2024-07-02T00:18:54.087697969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:54.089082 containerd[1460]: time="2024-07-02T00:18:54.088956264Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:54.093162 containerd[1460]: time="2024-07-02T00:18:54.090458737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:18:54.093162 containerd[1460]: time="2024-07-02T00:18:54.091677437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.587888794s" Jul 2 00:18:54.093162 containerd[1460]: time="2024-07-02T00:18:54.091728548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:18:54.174080 containerd[1460]: time="2024-07-02T00:18:54.174016599Z" level=info msg="CreateContainer within sandbox \"c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:18:54.217155 containerd[1460]: time="2024-07-02T00:18:54.217057295Z" level=info msg="CreateContainer 
within sandbox \"c2899f82f116e8994d759357409bc5c8b820663f04ec172a858dc38861e3fc22\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6e276e3287e9b736f3485c1c8565248333e36206b322c31fbc036e31b97263ce\"" Jul 2 00:18:54.219322 containerd[1460]: time="2024-07-02T00:18:54.218021701Z" level=info msg="StartContainer for \"6e276e3287e9b736f3485c1c8565248333e36206b322c31fbc036e31b97263ce\"" Jul 2 00:18:54.281125 systemd[1]: Started cri-containerd-6e276e3287e9b736f3485c1c8565248333e36206b322c31fbc036e31b97263ce.scope - libcontainer container 6e276e3287e9b736f3485c1c8565248333e36206b322c31fbc036e31b97263ce. Jul 2 00:18:54.364323 containerd[1460]: time="2024-07-02T00:18:54.364109627Z" level=info msg="StartContainer for \"6e276e3287e9b736f3485c1c8565248333e36206b322c31fbc036e31b97263ce\" returns successfully" Jul 2 00:18:54.492708 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:18:54.492918 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Jul 2 00:18:54.572012 kubelet[2555]: E0702 00:18:54.571675 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:54.614817 kubelet[2555]: I0702 00:18:54.614694 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-th4hz" podStartSLOduration=1.996273422 podStartE2EDuration="16.614613163s" podCreationTimestamp="2024-07-02 00:18:38 +0000 UTC" firstStartedPulling="2024-07-02 00:18:39.473972439 +0000 UTC m=+27.458376492" lastFinishedPulling="2024-07-02 00:18:54.092312136 +0000 UTC m=+42.076716233" observedRunningTime="2024-07-02 00:18:54.607457009 +0000 UTC m=+42.591861081" watchObservedRunningTime="2024-07-02 00:18:54.614613163 +0000 UTC m=+42.599017236" Jul 2 00:18:56.918759 systemd-networkd[1370]: vxlan.calico: Link UP Jul 2 00:18:56.918765 systemd-networkd[1370]: vxlan.calico: Gained carrier Jul 2 00:18:57.309718 containerd[1460]: time="2024-07-02T00:18:57.308351910Z" level=info msg="StopPodSandbox for \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\"" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.446 [INFO][4088] k8s.go 608: Cleaning up netns ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.448 [INFO][4088] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" iface="eth0" netns="/var/run/netns/cni-d0a0c6d6-0d77-3ef1-07f0-10bb46a7b6c7" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.449 [INFO][4088] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" iface="eth0" netns="/var/run/netns/cni-d0a0c6d6-0d77-3ef1-07f0-10bb46a7b6c7" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.449 [INFO][4088] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" iface="eth0" netns="/var/run/netns/cni-d0a0c6d6-0d77-3ef1-07f0-10bb46a7b6c7" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.449 [INFO][4088] k8s.go 615: Releasing IP address(es) ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.449 [INFO][4088] utils.go 188: Calico CNI releasing IP address ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.602 [INFO][4094] ipam_plugin.go 411: Releasing address using handleID ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.603 [INFO][4094] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.604 [INFO][4094] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.616 [WARNING][4094] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.616 [INFO][4094] ipam_plugin.go 439: Releasing address using workloadID ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.619 [INFO][4094] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:57.627058 containerd[1460]: 2024-07-02 00:18:57.623 [INFO][4088] k8s.go 621: Teardown processing complete. ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:18:57.631313 containerd[1460]: time="2024-07-02T00:18:57.627801486Z" level=info msg="TearDown network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\" successfully" Jul 2 00:18:57.631313 containerd[1460]: time="2024-07-02T00:18:57.627918318Z" level=info msg="StopPodSandbox for \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\" returns successfully" Jul 2 00:18:57.631372 systemd[1]: run-netns-cni\x2dd0a0c6d6\x2d0d77\x2d3ef1\x2d07f0\x2d10bb46a7b6c7.mount: Deactivated successfully. 
Jul 2 00:18:57.633438 kubelet[2555]: E0702 00:18:57.632123 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:57.635403 containerd[1460]: time="2024-07-02T00:18:57.633369824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9gzwx,Uid:88002364-1346-4c15-9674-d6ca67a6704b,Namespace:kube-system,Attempt:1,}" Jul 2 00:18:57.868436 systemd-networkd[1370]: cali848e62dc396: Link UP Jul 2 00:18:57.869850 systemd-networkd[1370]: cali848e62dc396: Gained carrier Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.725 [INFO][4105] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0 coredns-76f75df574- kube-system 88002364-1346-4c15-9674-d6ca67a6704b 839 0 2024-07-02 00:18:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-8-31c642c6eb coredns-76f75df574-9gzwx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali848e62dc396 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Namespace="kube-system" Pod="coredns-76f75df574-9gzwx" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.726 [INFO][4105] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Namespace="kube-system" Pod="coredns-76f75df574-9gzwx" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.785 [INFO][4112] 
ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" HandleID="k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.804 [INFO][4112] ipam_plugin.go 264: Auto assigning IP ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" HandleID="k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290a20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-8-31c642c6eb", "pod":"coredns-76f75df574-9gzwx", "timestamp":"2024-07-02 00:18:57.785657651 +0000 UTC"}, Hostname:"ci-3975.1.1-8-31c642c6eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.804 [INFO][4112] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.804 [INFO][4112] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.805 [INFO][4112] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-8-31c642c6eb' Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.809 [INFO][4112] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.820 [INFO][4112] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.829 [INFO][4112] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.832 [INFO][4112] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.836 [INFO][4112] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.836 [INFO][4112] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.839 [INFO][4112] ipam.go 1685: Creating new handle: k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12 Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.845 [INFO][4112] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.854 [INFO][4112] ipam.go 1216: Successfully claimed IPs: [192.168.13.65/26] block=192.168.13.64/26 
handle="k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.854 [INFO][4112] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.65/26] handle="k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.855 [INFO][4112] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:18:57.898140 containerd[1460]: 2024-07-02 00:18:57.855 [INFO][4112] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.65/26] IPv6=[] ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" HandleID="k8s-pod-network.16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.900805 containerd[1460]: 2024-07-02 00:18:57.860 [INFO][4105] k8s.go 386: Populated endpoint ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Namespace="kube-system" Pod="coredns-76f75df574-9gzwx" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"88002364-1346-4c15-9674-d6ca67a6704b", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"", Pod:"coredns-76f75df574-9gzwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali848e62dc396", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:57.900805 containerd[1460]: 2024-07-02 00:18:57.860 [INFO][4105] k8s.go 387: Calico CNI using IPs: [192.168.13.65/32] ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Namespace="kube-system" Pod="coredns-76f75df574-9gzwx" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.900805 containerd[1460]: 2024-07-02 00:18:57.861 [INFO][4105] dataplane_linux.go 68: Setting the host side veth name to cali848e62dc396 ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Namespace="kube-system" Pod="coredns-76f75df574-9gzwx" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.900805 containerd[1460]: 2024-07-02 00:18:57.869 [INFO][4105] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Namespace="kube-system" Pod="coredns-76f75df574-9gzwx" 
WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.900805 containerd[1460]: 2024-07-02 00:18:57.870 [INFO][4105] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Namespace="kube-system" Pod="coredns-76f75df574-9gzwx" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"88002364-1346-4c15-9674-d6ca67a6704b", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12", Pod:"coredns-76f75df574-9gzwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali848e62dc396", MAC:"a2:2b:7e:f7:b9:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:18:57.900805 containerd[1460]: 2024-07-02 00:18:57.887 [INFO][4105] k8s.go 500: Wrote updated endpoint to datastore ContainerID="16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12" Namespace="kube-system" Pod="coredns-76f75df574-9gzwx" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:18:57.952289 containerd[1460]: time="2024-07-02T00:18:57.951950708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:57.952289 containerd[1460]: time="2024-07-02T00:18:57.952032096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:57.952289 containerd[1460]: time="2024-07-02T00:18:57.952101979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:57.952289 containerd[1460]: time="2024-07-02T00:18:57.952120597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:57.982093 systemd[1]: Started cri-containerd-16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12.scope - libcontainer container 16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12. 
Jul 2 00:18:58.071145 containerd[1460]: time="2024-07-02T00:18:58.071089430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9gzwx,Uid:88002364-1346-4c15-9674-d6ca67a6704b,Namespace:kube-system,Attempt:1,} returns sandbox id \"16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12\"" Jul 2 00:18:58.073309 kubelet[2555]: E0702 00:18:58.073271 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:58.116872 containerd[1460]: time="2024-07-02T00:18:58.116601515Z" level=info msg="CreateContainer within sandbox \"16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:18:58.147845 containerd[1460]: time="2024-07-02T00:18:58.147104324Z" level=info msg="CreateContainer within sandbox \"16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d968a4df0fb2dd1be6c9c9452d34032b4978ef987bda8f305fff524aad065aa\"" Jul 2 00:18:58.148605 containerd[1460]: time="2024-07-02T00:18:58.148288660Z" level=info msg="StartContainer for \"8d968a4df0fb2dd1be6c9c9452d34032b4978ef987bda8f305fff524aad065aa\"" Jul 2 00:18:58.190256 systemd[1]: Started cri-containerd-8d968a4df0fb2dd1be6c9c9452d34032b4978ef987bda8f305fff524aad065aa.scope - libcontainer container 8d968a4df0fb2dd1be6c9c9452d34032b4978ef987bda8f305fff524aad065aa. 
Jul 2 00:18:58.239182 containerd[1460]: time="2024-07-02T00:18:58.239115655Z" level=info msg="StartContainer for \"8d968a4df0fb2dd1be6c9c9452d34032b4978ef987bda8f305fff524aad065aa\" returns successfully" Jul 2 00:18:58.585223 kubelet[2555]: E0702 00:18:58.583289 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:18:58.618934 kubelet[2555]: I0702 00:18:58.618634 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9gzwx" podStartSLOduration=34.607772352 podStartE2EDuration="34.607772352s" podCreationTimestamp="2024-07-02 00:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:58.606411725 +0000 UTC m=+46.590815795" watchObservedRunningTime="2024-07-02 00:18:58.607772352 +0000 UTC m=+46.592176425" Jul 2 00:18:58.748033 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Jul 2 00:18:59.067252 systemd-networkd[1370]: cali848e62dc396: Gained IPv6LL Jul 2 00:18:59.586280 kubelet[2555]: E0702 00:18:59.586248 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:00.279564 containerd[1460]: time="2024-07-02T00:19:00.279389800Z" level=info msg="StopPodSandbox for \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\"" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.360 [INFO][4228] k8s.go 608: Cleaning up netns ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.361 [INFO][4228] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" iface="eth0" netns="/var/run/netns/cni-ad4c76e4-58ed-369c-7b13-336346f92445" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.361 [INFO][4228] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" iface="eth0" netns="/var/run/netns/cni-ad4c76e4-58ed-369c-7b13-336346f92445" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.362 [INFO][4228] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" iface="eth0" netns="/var/run/netns/cni-ad4c76e4-58ed-369c-7b13-336346f92445" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.362 [INFO][4228] k8s.go 615: Releasing IP address(es) ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.362 [INFO][4228] utils.go 188: Calico CNI releasing IP address ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.398 [INFO][4235] ipam_plugin.go 411: Releasing address using handleID ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.398 [INFO][4235] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.398 [INFO][4235] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.406 [WARNING][4235] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.406 [INFO][4235] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.409 [INFO][4235] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:00.414933 containerd[1460]: 2024-07-02 00:19:00.412 [INFO][4228] k8s.go 621: Teardown processing complete. ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:00.419346 containerd[1460]: time="2024-07-02T00:19:00.419018475Z" level=info msg="TearDown network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\" successfully" Jul 2 00:19:00.419346 containerd[1460]: time="2024-07-02T00:19:00.419075250Z" level=info msg="StopPodSandbox for \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\" returns successfully" Jul 2 00:19:00.420915 containerd[1460]: time="2024-07-02T00:19:00.420042737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84d6cb5bbb-c2tkd,Uid:8d27d654-6b07-46b5-9c59-a377d9f9d512,Namespace:calico-system,Attempt:1,}" Jul 2 00:19:00.420520 systemd[1]: run-netns-cni\x2dad4c76e4\x2d58ed\x2d369c\x2d7b13\x2d336346f92445.mount: Deactivated successfully. 
Jul 2 00:19:00.599384 kubelet[2555]: E0702 00:19:00.599240 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:00.667527 systemd-networkd[1370]: cali4a23ac7bc62: Link UP Jul 2 00:19:00.669509 systemd-networkd[1370]: cali4a23ac7bc62: Gained carrier Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.524 [INFO][4241] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0 calico-kube-controllers-84d6cb5bbb- calico-system 8d27d654-6b07-46b5-9c59-a377d9f9d512 867 0 2024-07-02 00:18:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84d6cb5bbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.1.1-8-31c642c6eb calico-kube-controllers-84d6cb5bbb-c2tkd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a23ac7bc62 [] []}} ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Namespace="calico-system" Pod="calico-kube-controllers-84d6cb5bbb-c2tkd" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.524 [INFO][4241] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Namespace="calico-system" Pod="calico-kube-controllers-84d6cb5bbb-c2tkd" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.579 [INFO][4253] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" HandleID="k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.594 [INFO][4253] ipam_plugin.go 264: Auto assigning IP ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" HandleID="k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-8-31c642c6eb", "pod":"calico-kube-controllers-84d6cb5bbb-c2tkd", "timestamp":"2024-07-02 00:19:00.579396549 +0000 UTC"}, Hostname:"ci-3975.1.1-8-31c642c6eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.594 [INFO][4253] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.594 [INFO][4253] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.594 [INFO][4253] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-8-31c642c6eb' Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.600 [INFO][4253] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.610 [INFO][4253] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.620 [INFO][4253] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.625 [INFO][4253] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.629 [INFO][4253] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.629 [INFO][4253] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.633 [INFO][4253] ipam.go 1685: Creating new handle: k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311 Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.639 [INFO][4253] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.658 [INFO][4253] ipam.go 1216: Successfully claimed IPs: [192.168.13.66/26] block=192.168.13.64/26 
handle="k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.658 [INFO][4253] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.66/26] handle="k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" host="ci-3975.1.1-8-31c642c6eb" Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.658 [INFO][4253] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:00.691167 containerd[1460]: 2024-07-02 00:19:00.658 [INFO][4253] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.66/26] IPv6=[] ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" HandleID="k8s-pod-network.e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.692583 containerd[1460]: 2024-07-02 00:19:00.662 [INFO][4241] k8s.go 386: Populated endpoint ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Namespace="calico-system" Pod="calico-kube-controllers-84d6cb5bbb-c2tkd" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0", GenerateName:"calico-kube-controllers-84d6cb5bbb-", Namespace:"calico-system", SelfLink:"", UID:"8d27d654-6b07-46b5-9c59-a377d9f9d512", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84d6cb5bbb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"", Pod:"calico-kube-controllers-84d6cb5bbb-c2tkd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a23ac7bc62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:00.692583 containerd[1460]: 2024-07-02 00:19:00.662 [INFO][4241] k8s.go 387: Calico CNI using IPs: [192.168.13.66/32] ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Namespace="calico-system" Pod="calico-kube-controllers-84d6cb5bbb-c2tkd" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.692583 containerd[1460]: 2024-07-02 00:19:00.662 [INFO][4241] dataplane_linux.go 68: Setting the host side veth name to cali4a23ac7bc62 ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Namespace="calico-system" Pod="calico-kube-controllers-84d6cb5bbb-c2tkd" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.692583 containerd[1460]: 2024-07-02 00:19:00.668 [INFO][4241] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Namespace="calico-system" Pod="calico-kube-controllers-84d6cb5bbb-c2tkd" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 
00:19:00.692583 containerd[1460]: 2024-07-02 00:19:00.669 [INFO][4241] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Namespace="calico-system" Pod="calico-kube-controllers-84d6cb5bbb-c2tkd" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0", GenerateName:"calico-kube-controllers-84d6cb5bbb-", Namespace:"calico-system", SelfLink:"", UID:"8d27d654-6b07-46b5-9c59-a377d9f9d512", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84d6cb5bbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311", Pod:"calico-kube-controllers-84d6cb5bbb-c2tkd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a23ac7bc62", MAC:"5a:8e:8d:b4:d6:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 
00:19:00.692583 containerd[1460]: 2024-07-02 00:19:00.688 [INFO][4241] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311" Namespace="calico-system" Pod="calico-kube-controllers-84d6cb5bbb-c2tkd" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:00.739951 containerd[1460]: time="2024-07-02T00:19:00.739720569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:19:00.739951 containerd[1460]: time="2024-07-02T00:19:00.739836871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:00.740625 containerd[1460]: time="2024-07-02T00:19:00.739922684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:19:00.740625 containerd[1460]: time="2024-07-02T00:19:00.739944848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:00.782164 systemd[1]: Started cri-containerd-e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311.scope - libcontainer container e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311. 
Jul 2 00:19:00.844917 containerd[1460]: time="2024-07-02T00:19:00.844803476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84d6cb5bbb-c2tkd,Uid:8d27d654-6b07-46b5-9c59-a377d9f9d512,Namespace:calico-system,Attempt:1,} returns sandbox id \"e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311\"" Jul 2 00:19:00.848948 containerd[1460]: time="2024-07-02T00:19:00.848665438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:19:01.280730 containerd[1460]: time="2024-07-02T00:19:01.280225030Z" level=info msg="StopPodSandbox for \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\"" Jul 2 00:19:01.282384 containerd[1460]: time="2024-07-02T00:19:01.282330493Z" level=info msg="StopPodSandbox for \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\"" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.380 [INFO][4342] k8s.go 608: Cleaning up netns ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.381 [INFO][4342] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" iface="eth0" netns="/var/run/netns/cni-c5746fb7-2e33-3b68-2809-b31b04f6d6e1" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.381 [INFO][4342] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" iface="eth0" netns="/var/run/netns/cni-c5746fb7-2e33-3b68-2809-b31b04f6d6e1" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.381 [INFO][4342] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" iface="eth0" netns="/var/run/netns/cni-c5746fb7-2e33-3b68-2809-b31b04f6d6e1" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.381 [INFO][4342] k8s.go 615: Releasing IP address(es) ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.381 [INFO][4342] utils.go 188: Calico CNI releasing IP address ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.435 [INFO][4351] ipam_plugin.go 411: Releasing address using handleID ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.435 [INFO][4351] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.435 [INFO][4351] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.450 [WARNING][4351] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.450 [INFO][4351] ipam_plugin.go 439: Releasing address using workloadID ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.457 [INFO][4351] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:01.469244 containerd[1460]: 2024-07-02 00:19:01.465 [INFO][4342] k8s.go 621: Teardown processing complete. ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:01.473388 containerd[1460]: time="2024-07-02T00:19:01.471992846Z" level=info msg="TearDown network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\" successfully" Jul 2 00:19:01.473388 containerd[1460]: time="2024-07-02T00:19:01.472103499Z" level=info msg="StopPodSandbox for \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\" returns successfully" Jul 2 00:19:01.476980 containerd[1460]: time="2024-07-02T00:19:01.473787224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kkwj,Uid:40064fc9-24a4-4ccf-9623-b652332a27c6,Namespace:calico-system,Attempt:1,}" Jul 2 00:19:01.476561 systemd[1]: run-netns-cni\x2dc5746fb7\x2d2e33\x2d3b68\x2d2809\x2db31b04f6d6e1.mount: Deactivated successfully. 
Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.397 [INFO][4335] k8s.go 608: Cleaning up netns ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.397 [INFO][4335] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" iface="eth0" netns="/var/run/netns/cni-c84fb3c3-08ba-55ce-8a4a-888288efc2e6" Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.399 [INFO][4335] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" iface="eth0" netns="/var/run/netns/cni-c84fb3c3-08ba-55ce-8a4a-888288efc2e6" Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.401 [INFO][4335] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" iface="eth0" netns="/var/run/netns/cni-c84fb3c3-08ba-55ce-8a4a-888288efc2e6" Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.401 [INFO][4335] k8s.go 615: Releasing IP address(es) ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.401 [INFO][4335] utils.go 188: Calico CNI releasing IP address ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.464 [INFO][4356] ipam_plugin.go 411: Releasing address using handleID ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.464 [INFO][4356] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.464 [INFO][4356] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.482 [WARNING][4356] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.482 [INFO][4356] ipam_plugin.go 439: Releasing address using workloadID ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.486 [INFO][4356] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:19:01.493536 containerd[1460]: 2024-07-02 00:19:01.490 [INFO][4335] k8s.go 621: Teardown processing complete. ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467"
Jul 2 00:19:01.496935 containerd[1460]: time="2024-07-02T00:19:01.494650822Z" level=info msg="TearDown network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\" successfully"
Jul 2 00:19:01.496935 containerd[1460]: time="2024-07-02T00:19:01.494707150Z" level=info msg="StopPodSandbox for \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\" returns successfully"
Jul 2 00:19:01.497194 kubelet[2555]: E0702 00:19:01.495528 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:19:01.499323 containerd[1460]: time="2024-07-02T00:19:01.499220599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ccp64,Uid:824774ed-b893-4741-9700-12c3e4efcaed,Namespace:kube-system,Attempt:1,}"
Jul 2 00:19:01.501142 systemd[1]: run-netns-cni\x2dc84fb3c3\x2d08ba\x2d55ce\x2d8a4a\x2d888288efc2e6.mount: Deactivated successfully.
Jul 2 00:19:01.845106 systemd-networkd[1370]: calieb9a0a29a8d: Link UP
Jul 2 00:19:01.847349 systemd-networkd[1370]: calieb9a0a29a8d: Gained carrier
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.649 [INFO][4375] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0 coredns-76f75df574- kube-system 824774ed-b893-4741-9700-12c3e4efcaed 876 0 2024-07-02 00:18:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-8-31c642c6eb coredns-76f75df574-ccp64 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieb9a0a29a8d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Namespace="kube-system" Pod="coredns-76f75df574-ccp64" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.650 [INFO][4375] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Namespace="kube-system" Pod="coredns-76f75df574-ccp64" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.735 [INFO][4393] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" HandleID="k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.760 [INFO][4393] ipam_plugin.go 264: Auto assigning IP ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" HandleID="k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f62c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-8-31c642c6eb", "pod":"coredns-76f75df574-ccp64", "timestamp":"2024-07-02 00:19:01.735683604 +0000 UTC"}, Hostname:"ci-3975.1.1-8-31c642c6eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.761 [INFO][4393] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.761 [INFO][4393] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.761 [INFO][4393] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-8-31c642c6eb'
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.766 [INFO][4393] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.774 [INFO][4393] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.786 [INFO][4393] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.793 [INFO][4393] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.800 [INFO][4393] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.801 [INFO][4393] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.806 [INFO][4393] ipam.go 1685: Creating new handle: k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.815 [INFO][4393] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.827 [INFO][4393] ipam.go 1216: Successfully claimed IPs: [192.168.13.67/26] block=192.168.13.64/26 handle="k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.827 [INFO][4393] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.67/26] handle="k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.828 [INFO][4393] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:19:01.887198 containerd[1460]: 2024-07-02 00:19:01.828 [INFO][4393] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.67/26] IPv6=[] ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" HandleID="k8s-pod-network.a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.891838 containerd[1460]: 2024-07-02 00:19:01.838 [INFO][4375] k8s.go 386: Populated endpoint ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Namespace="kube-system" Pod="coredns-76f75df574-ccp64" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"824774ed-b893-4741-9700-12c3e4efcaed", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"", Pod:"coredns-76f75df574-ccp64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb9a0a29a8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:19:01.891838 containerd[1460]: 2024-07-02 00:19:01.838 [INFO][4375] k8s.go 387: Calico CNI using IPs: [192.168.13.67/32] ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Namespace="kube-system" Pod="coredns-76f75df574-ccp64" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.891838 containerd[1460]: 2024-07-02 00:19:01.838 [INFO][4375] dataplane_linux.go 68: Setting the host side veth name to calieb9a0a29a8d ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Namespace="kube-system" Pod="coredns-76f75df574-ccp64" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.891838 containerd[1460]: 2024-07-02 00:19:01.843 [INFO][4375] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Namespace="kube-system" Pod="coredns-76f75df574-ccp64" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.891838 containerd[1460]: 2024-07-02 00:19:01.848 [INFO][4375] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Namespace="kube-system" Pod="coredns-76f75df574-ccp64" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"824774ed-b893-4741-9700-12c3e4efcaed", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7", Pod:"coredns-76f75df574-ccp64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb9a0a29a8d", MAC:"72:41:ad:6c:c2:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:19:01.891838 containerd[1460]: 2024-07-02 00:19:01.883 [INFO][4375] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7" Namespace="kube-system" Pod="coredns-76f75df574-ccp64" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0"
Jul 2 00:19:01.966039 containerd[1460]: time="2024-07-02T00:19:01.965051493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:19:01.966039 containerd[1460]: time="2024-07-02T00:19:01.965158187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:19:01.966039 containerd[1460]: time="2024-07-02T00:19:01.965246051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:19:01.966039 containerd[1460]: time="2024-07-02T00:19:01.965264339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:19:01.983122 systemd-networkd[1370]: calie2282ffe0d1: Link UP
Jul 2 00:19:01.985524 systemd-networkd[1370]: calie2282ffe0d1: Gained carrier
Jul 2 00:19:02.010598 systemd[1]: Started cri-containerd-a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7.scope - libcontainer container a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7.
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.602 [INFO][4364] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0 csi-node-driver- calico-system 40064fc9-24a4-4ccf-9623-b652332a27c6 875 0 2024-07-02 00:18:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.1.1-8-31c642c6eb csi-node-driver-9kkwj eth0 default [] [] [kns.calico-system ksa.calico-system.default] calie2282ffe0d1 [] []}} ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Namespace="calico-system" Pod="csi-node-driver-9kkwj" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.602 [INFO][4364] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Namespace="calico-system" Pod="csi-node-driver-9kkwj" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.735 [INFO][4389] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" HandleID="k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.763 [INFO][4389] ipam_plugin.go 264: Auto assigning IP ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" HandleID="k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b8450), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-8-31c642c6eb", "pod":"csi-node-driver-9kkwj", "timestamp":"2024-07-02 00:19:01.73568498 +0000 UTC"}, Hostname:"ci-3975.1.1-8-31c642c6eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.763 [INFO][4389] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.828 [INFO][4389] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.829 [INFO][4389] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-8-31c642c6eb'
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.836 [INFO][4389] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.865 [INFO][4389] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.893 [INFO][4389] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.899 [INFO][4389] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.907 [INFO][4389] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.907 [INFO][4389] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.918 [INFO][4389] ipam.go 1685: Creating new handle: k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.936 [INFO][4389] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.959 [INFO][4389] ipam.go 1216: Successfully claimed IPs: [192.168.13.68/26] block=192.168.13.64/26 handle="k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.959 [INFO][4389] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.68/26] handle="k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.959 [INFO][4389] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:19:02.034568 containerd[1460]: 2024-07-02 00:19:01.959 [INFO][4389] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.68/26] IPv6=[] ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" HandleID="k8s-pod-network.4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0"
Jul 2 00:19:02.035248 containerd[1460]: 2024-07-02 00:19:01.970 [INFO][4364] k8s.go 386: Populated endpoint ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Namespace="calico-system" Pod="csi-node-driver-9kkwj" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40064fc9-24a4-4ccf-9623-b652332a27c6", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"", Pod:"csi-node-driver-9kkwj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie2282ffe0d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:19:02.035248 containerd[1460]: 2024-07-02 00:19:01.971 [INFO][4364] k8s.go 387: Calico CNI using IPs: [192.168.13.68/32] ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Namespace="calico-system" Pod="csi-node-driver-9kkwj" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0"
Jul 2 00:19:02.035248 containerd[1460]: 2024-07-02 00:19:01.971 [INFO][4364] dataplane_linux.go 68: Setting the host side veth name to calie2282ffe0d1 ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Namespace="calico-system" Pod="csi-node-driver-9kkwj" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0"
Jul 2 00:19:02.035248 containerd[1460]: 2024-07-02 00:19:01.987 [INFO][4364] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Namespace="calico-system" Pod="csi-node-driver-9kkwj" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0"
Jul 2 00:19:02.035248 containerd[1460]: 2024-07-02 00:19:01.990 [INFO][4364] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Namespace="calico-system" Pod="csi-node-driver-9kkwj" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40064fc9-24a4-4ccf-9623-b652332a27c6", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8", Pod:"csi-node-driver-9kkwj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie2282ffe0d1", MAC:"62:a8:98:47:9c:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:19:02.035248 containerd[1460]: 2024-07-02 00:19:02.029 [INFO][4364] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8" Namespace="calico-system" Pod="csi-node-driver-9kkwj" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0"
Jul 2 00:19:02.106149 containerd[1460]: time="2024-07-02T00:19:02.103766169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:19:02.111242 containerd[1460]: time="2024-07-02T00:19:02.108738043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:19:02.111242 containerd[1460]: time="2024-07-02T00:19:02.108825110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:19:02.111242 containerd[1460]: time="2024-07-02T00:19:02.108840927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:19:02.155929 containerd[1460]: time="2024-07-02T00:19:02.155137092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ccp64,Uid:824774ed-b893-4741-9700-12c3e4efcaed,Namespace:kube-system,Attempt:1,} returns sandbox id \"a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7\""
Jul 2 00:19:02.168965 kubelet[2555]: E0702 00:19:02.168898 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:19:02.178249 containerd[1460]: time="2024-07-02T00:19:02.177059729Z" level=info msg="CreateContainer within sandbox \"a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:19:02.183457 systemd[1]: Started cri-containerd-4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8.scope - libcontainer container 4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8.
Jul 2 00:19:02.228368 containerd[1460]: time="2024-07-02T00:19:02.228283374Z" level=info msg="CreateContainer within sandbox \"a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecc8154679f27996dd00db8ef547f6dd20f143b4389e583ecc6965fed36e3c50\""
Jul 2 00:19:02.232940 containerd[1460]: time="2024-07-02T00:19:02.231485925Z" level=info msg="StartContainer for \"ecc8154679f27996dd00db8ef547f6dd20f143b4389e583ecc6965fed36e3c50\""
Jul 2 00:19:02.324177 containerd[1460]: time="2024-07-02T00:19:02.324125886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kkwj,Uid:40064fc9-24a4-4ccf-9623-b652332a27c6,Namespace:calico-system,Attempt:1,} returns sandbox id \"4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8\""
Jul 2 00:19:02.348194 systemd[1]: Started cri-containerd-ecc8154679f27996dd00db8ef547f6dd20f143b4389e583ecc6965fed36e3c50.scope - libcontainer container ecc8154679f27996dd00db8ef547f6dd20f143b4389e583ecc6965fed36e3c50.
Jul 2 00:19:02.430331 containerd[1460]: time="2024-07-02T00:19:02.430106514Z" level=info msg="StartContainer for \"ecc8154679f27996dd00db8ef547f6dd20f143b4389e583ecc6965fed36e3c50\" returns successfully"
Jul 2 00:19:02.460638 systemd-networkd[1370]: cali4a23ac7bc62: Gained IPv6LL
Jul 2 00:19:02.626112 kubelet[2555]: E0702 00:19:02.626047 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:19:02.704342 kubelet[2555]: I0702 00:19:02.704024 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ccp64" podStartSLOduration=38.702741206 podStartE2EDuration="38.702741206s" podCreationTimestamp="2024-07-02 00:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:19:02.65798132 +0000 UTC m=+50.642385390" watchObservedRunningTime="2024-07-02 00:19:02.702741206 +0000 UTC m=+50.687145284"
Jul 2 00:19:03.355233 systemd-networkd[1370]: calieb9a0a29a8d: Gained IPv6LL
Jul 2 00:19:03.602872 containerd[1460]: time="2024-07-02T00:19:03.602668018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:03.605031 containerd[1460]: time="2024-07-02T00:19:03.604321174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793"
Jul 2 00:19:03.606685 containerd[1460]: time="2024-07-02T00:19:03.606508300Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:03.612978 containerd[1460]: time="2024-07-02T00:19:03.611006111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:03.612978 containerd[1460]: time="2024-07-02T00:19:03.612260581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.763524794s"
Jul 2 00:19:03.612978 containerd[1460]: time="2024-07-02T00:19:03.612305992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\""
Jul 2 00:19:03.613388 systemd-networkd[1370]: calie2282ffe0d1: Gained IPv6LL
Jul 2 00:19:03.620041 containerd[1460]: time="2024-07-02T00:19:03.618249270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\""
Jul 2 00:19:03.642487 kubelet[2555]: E0702 00:19:03.642398 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:19:03.685512 containerd[1460]: time="2024-07-02T00:19:03.683656718Z" level=info msg="CreateContainer within sandbox \"e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 2 00:19:03.734512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2502507779.mount: Deactivated successfully.
Jul 2 00:19:03.738038 containerd[1460]: time="2024-07-02T00:19:03.737719099Z" level=info msg="CreateContainer within sandbox \"e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6139864a308eb323bbe45415fa276fb94dfc1d01a72a352107c7f8f672c8f174\"" Jul 2 00:19:03.741952 containerd[1460]: time="2024-07-02T00:19:03.740478247Z" level=info msg="StartContainer for \"6139864a308eb323bbe45415fa276fb94dfc1d01a72a352107c7f8f672c8f174\"" Jul 2 00:19:03.797323 systemd[1]: Started cri-containerd-6139864a308eb323bbe45415fa276fb94dfc1d01a72a352107c7f8f672c8f174.scope - libcontainer container 6139864a308eb323bbe45415fa276fb94dfc1d01a72a352107c7f8f672c8f174. Jul 2 00:19:03.875932 containerd[1460]: time="2024-07-02T00:19:03.875766783Z" level=info msg="StartContainer for \"6139864a308eb323bbe45415fa276fb94dfc1d01a72a352107c7f8f672c8f174\" returns successfully" Jul 2 00:19:04.649398 kubelet[2555]: E0702 00:19:04.649336 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:04.709941 kubelet[2555]: I0702 00:19:04.708582 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84d6cb5bbb-c2tkd" podStartSLOduration=28.942508712 podStartE2EDuration="31.708515141s" podCreationTimestamp="2024-07-02 00:18:33 +0000 UTC" firstStartedPulling="2024-07-02 00:19:00.847676553 +0000 UTC m=+48.832080613" lastFinishedPulling="2024-07-02 00:19:03.61368298 +0000 UTC m=+51.598087042" observedRunningTime="2024-07-02 00:19:04.707874402 +0000 UTC m=+52.692278475" watchObservedRunningTime="2024-07-02 00:19:04.708515141 +0000 UTC m=+52.692919230" Jul 2 00:19:05.340296 containerd[1460]: time="2024-07-02T00:19:05.340226428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:05.345070 containerd[1460]: time="2024-07-02T00:19:05.343944927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:19:05.347631 containerd[1460]: time="2024-07-02T00:19:05.346175165Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:05.353112 containerd[1460]: time="2024-07-02T00:19:05.352699304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:05.354468 containerd[1460]: time="2024-07-02T00:19:05.354375030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.736057961s" Jul 2 00:19:05.354468 containerd[1460]: time="2024-07-02T00:19:05.354435446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:19:05.369066 containerd[1460]: time="2024-07-02T00:19:05.367840525Z" level=info msg="CreateContainer within sandbox \"4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:19:05.477558 containerd[1460]: time="2024-07-02T00:19:05.477499142Z" level=info msg="CreateContainer within sandbox \"4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"002eb22a135b6cc07e43881873b27c64d2b538f2b78d03153de64555c3fa6080\"" Jul 2 00:19:05.479260 containerd[1460]: time="2024-07-02T00:19:05.479209378Z" level=info msg="StartContainer for \"002eb22a135b6cc07e43881873b27c64d2b538f2b78d03153de64555c3fa6080\"" Jul 2 00:19:05.549093 systemd[1]: Started cri-containerd-002eb22a135b6cc07e43881873b27c64d2b538f2b78d03153de64555c3fa6080.scope - libcontainer container 002eb22a135b6cc07e43881873b27c64d2b538f2b78d03153de64555c3fa6080. Jul 2 00:19:05.736694 containerd[1460]: time="2024-07-02T00:19:05.736622960Z" level=info msg="StartContainer for \"002eb22a135b6cc07e43881873b27c64d2b538f2b78d03153de64555c3fa6080\" returns successfully" Jul 2 00:19:05.739801 containerd[1460]: time="2024-07-02T00:19:05.739714807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:19:06.292660 systemd[1]: Started sshd@7-64.227.97.255:22-147.75.109.163:41998.service - OpenSSH per-connection server daemon (147.75.109.163:41998). Jul 2 00:19:06.474770 sshd[4666]: Accepted publickey for core from 147.75.109.163 port 41998 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:06.477187 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:06.488258 systemd-logind[1446]: New session 8 of user core. Jul 2 00:19:06.497110 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:19:06.758442 sshd[4666]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:06.764628 systemd[1]: sshd@7-64.227.97.255:22-147.75.109.163:41998.service: Deactivated successfully. Jul 2 00:19:06.770595 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:19:06.773613 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:19:06.776665 systemd-logind[1446]: Removed session 8. 
Jul 2 00:19:07.316011 containerd[1460]: time="2024-07-02T00:19:07.315946417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:07.318246 containerd[1460]: time="2024-07-02T00:19:07.318176366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:19:07.319540 containerd[1460]: time="2024-07-02T00:19:07.319299607Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:07.322186 containerd[1460]: time="2024-07-02T00:19:07.321628844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:07.323112 containerd[1460]: time="2024-07-02T00:19:07.323068452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.583301796s" Jul 2 00:19:07.323211 containerd[1460]: time="2024-07-02T00:19:07.323116244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:19:07.326811 containerd[1460]: time="2024-07-02T00:19:07.326766493Z" level=info msg="CreateContainer within sandbox \"4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:19:07.351787 containerd[1460]: time="2024-07-02T00:19:07.351188299Z" level=info msg="CreateContainer within sandbox \"4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4b0cdaa7645f8fc6eb2ede07062cce567410276797cc68bdbdd77f7d5b1a8f39\"" Jul 2 00:19:07.354127 containerd[1460]: time="2024-07-02T00:19:07.352553776Z" level=info msg="StartContainer for \"4b0cdaa7645f8fc6eb2ede07062cce567410276797cc68bdbdd77f7d5b1a8f39\"" Jul 2 00:19:07.420125 systemd[1]: Started cri-containerd-4b0cdaa7645f8fc6eb2ede07062cce567410276797cc68bdbdd77f7d5b1a8f39.scope - libcontainer container 4b0cdaa7645f8fc6eb2ede07062cce567410276797cc68bdbdd77f7d5b1a8f39. Jul 2 00:19:07.513234 containerd[1460]: time="2024-07-02T00:19:07.513173058Z" level=info msg="StartContainer for \"4b0cdaa7645f8fc6eb2ede07062cce567410276797cc68bdbdd77f7d5b1a8f39\" returns successfully" Jul 2 00:19:08.473735 kubelet[2555]: I0702 00:19:08.473584 2555 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:19:08.480725 kubelet[2555]: I0702 00:19:08.480602 2555 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:19:11.785622 systemd[1]: Started sshd@8-64.227.97.255:22-147.75.109.163:42000.service - OpenSSH per-connection server daemon (147.75.109.163:42000). Jul 2 00:19:11.951564 sshd[4737]: Accepted publickey for core from 147.75.109.163 port 42000 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:11.956743 sshd[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:11.971132 systemd-logind[1446]: New session 9 of user core. 
Jul 2 00:19:11.978376 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:19:12.269425 containerd[1460]: time="2024-07-02T00:19:12.264540599Z" level=info msg="StopPodSandbox for \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\"" Jul 2 00:19:12.561687 sshd[4737]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:12.575696 systemd[1]: sshd@8-64.227.97.255:22-147.75.109.163:42000.service: Deactivated successfully. Jul 2 00:19:12.583282 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:19:12.592467 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:19:12.597065 systemd-logind[1446]: Removed session 9. Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.526 [WARNING][4763] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"824774ed-b893-4741-9700-12c3e4efcaed", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", 
ContainerID:"a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7", Pod:"coredns-76f75df574-ccp64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb9a0a29a8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.527 [INFO][4763] k8s.go 608: Cleaning up netns ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.529 [INFO][4763] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" iface="eth0" netns="" Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.530 [INFO][4763] k8s.go 615: Releasing IP address(es) ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.530 [INFO][4763] utils.go 188: Calico CNI releasing IP address ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.597 [INFO][4772] ipam_plugin.go 411: Releasing address using handleID ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.598 [INFO][4772] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.598 [INFO][4772] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.614 [WARNING][4772] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.616 [INFO][4772] ipam_plugin.go 439: Releasing address using workloadID ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.629 [INFO][4772] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:12.635473 containerd[1460]: 2024-07-02 00:19:12.632 [INFO][4763] k8s.go 621: Teardown processing complete. ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:12.638600 containerd[1460]: time="2024-07-02T00:19:12.636000652Z" level=info msg="TearDown network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\" successfully" Jul 2 00:19:12.638600 containerd[1460]: time="2024-07-02T00:19:12.636044940Z" level=info msg="StopPodSandbox for \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\" returns successfully" Jul 2 00:19:12.652518 containerd[1460]: time="2024-07-02T00:19:12.651060440Z" level=info msg="RemovePodSandbox for \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\"" Jul 2 00:19:12.657666 containerd[1460]: time="2024-07-02T00:19:12.657567872Z" level=info msg="Forcibly stopping sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\"" Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.773 [WARNING][4793] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"824774ed-b893-4741-9700-12c3e4efcaed", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"a60570c569581e45f249172cc0e8416c685c5b9d0c9c2e003d09a2470f878fc7", Pod:"coredns-76f75df574-ccp64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb9a0a29a8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.774 [INFO][4793] k8s.go 608: 
Cleaning up netns ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.774 [INFO][4793] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" iface="eth0" netns="" Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.774 [INFO][4793] k8s.go 615: Releasing IP address(es) ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.774 [INFO][4793] utils.go 188: Calico CNI releasing IP address ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.857 [INFO][4799] ipam_plugin.go 411: Releasing address using handleID ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.857 [INFO][4799] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.857 [INFO][4799] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.873 [WARNING][4799] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.874 [INFO][4799] ipam_plugin.go 439: Releasing address using workloadID ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" HandleID="k8s-pod-network.226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--ccp64-eth0" Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.881 [INFO][4799] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:12.891018 containerd[1460]: 2024-07-02 00:19:12.885 [INFO][4793] k8s.go 621: Teardown processing complete. ContainerID="226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467" Jul 2 00:19:12.891018 containerd[1460]: time="2024-07-02T00:19:12.889437923Z" level=info msg="TearDown network for sandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\" successfully" Jul 2 00:19:12.963755 containerd[1460]: time="2024-07-02T00:19:12.963616737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:19:12.964000 containerd[1460]: time="2024-07-02T00:19:12.963911428Z" level=info msg="RemovePodSandbox \"226279d9da9cc06c16a7c7d0061bccff729a887fe6dadd447ee24a19a0d2d467\" returns successfully" Jul 2 00:19:12.965517 containerd[1460]: time="2024-07-02T00:19:12.965428012Z" level=info msg="StopPodSandbox for \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\"" Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.055 [WARNING][4818] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"88002364-1346-4c15-9674-d6ca67a6704b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12", Pod:"coredns-76f75df574-9gzwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali848e62dc396", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.055 [INFO][4818] k8s.go 608: Cleaning up netns ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.055 [INFO][4818] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" iface="eth0" netns="" Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.055 [INFO][4818] k8s.go 615: Releasing IP address(es) ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.056 [INFO][4818] utils.go 188: Calico CNI releasing IP address ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.123 [INFO][4824] ipam_plugin.go 411: Releasing address using handleID ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.123 [INFO][4824] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.124 [INFO][4824] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.137 [WARNING][4824] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.137 [INFO][4824] ipam_plugin.go 439: Releasing address using workloadID ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.142 [INFO][4824] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:13.149275 containerd[1460]: 2024-07-02 00:19:13.146 [INFO][4818] k8s.go 621: Teardown processing complete. 
ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:19:13.149275 containerd[1460]: time="2024-07-02T00:19:13.148794297Z" level=info msg="TearDown network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\" successfully" Jul 2 00:19:13.149275 containerd[1460]: time="2024-07-02T00:19:13.148835188Z" level=info msg="StopPodSandbox for \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\" returns successfully" Jul 2 00:19:13.151825 containerd[1460]: time="2024-07-02T00:19:13.151041313Z" level=info msg="RemovePodSandbox for \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\"" Jul 2 00:19:13.151825 containerd[1460]: time="2024-07-02T00:19:13.151091808Z" level=info msg="Forcibly stopping sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\"" Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.255 [WARNING][4842] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"88002364-1346-4c15-9674-d6ca67a6704b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"16b83ea293f57cd6c3b144121d3faf3124685d1f90f76d043c15760aedb13f12", Pod:"coredns-76f75df574-9gzwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali848e62dc396", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.255 [INFO][4842] k8s.go 608: 
Cleaning up netns ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.256 [INFO][4842] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" iface="eth0" netns="" Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.256 [INFO][4842] k8s.go 615: Releasing IP address(es) ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.256 [INFO][4842] utils.go 188: Calico CNI releasing IP address ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.317 [INFO][4848] ipam_plugin.go 411: Releasing address using handleID ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.318 [INFO][4848] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.318 [INFO][4848] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.333 [WARNING][4848] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.335 [INFO][4848] ipam_plugin.go 439: Releasing address using workloadID ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" HandleID="k8s-pod-network.661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Workload="ci--3975.1.1--8--31c642c6eb-k8s-coredns--76f75df574--9gzwx-eth0" Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.353 [INFO][4848] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:13.359521 containerd[1460]: 2024-07-02 00:19:13.356 [INFO][4842] k8s.go 621: Teardown processing complete. ContainerID="661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499" Jul 2 00:19:13.366731 containerd[1460]: time="2024-07-02T00:19:13.362439498Z" level=info msg="TearDown network for sandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\" successfully" Jul 2 00:19:13.396648 containerd[1460]: time="2024-07-02T00:19:13.396559972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:19:13.396998 containerd[1460]: time="2024-07-02T00:19:13.396959953Z" level=info msg="RemovePodSandbox \"661e55041f1972bf9d38b9223fcf97849743408eac967c426eccfffe2507b499\" returns successfully" Jul 2 00:19:13.398038 containerd[1460]: time="2024-07-02T00:19:13.397944263Z" level=info msg="StopPodSandbox for \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\"" Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.480 [WARNING][4866] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40064fc9-24a4-4ccf-9623-b652332a27c6", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8", Pod:"csi-node-driver-9kkwj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie2282ffe0d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.481 [INFO][4866] k8s.go 608: Cleaning up netns ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.481 [INFO][4866] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" iface="eth0" netns="" Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.481 [INFO][4866] k8s.go 615: Releasing IP address(es) ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.481 [INFO][4866] utils.go 188: Calico CNI releasing IP address ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.535 [INFO][4872] ipam_plugin.go 411: Releasing address using handleID ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.535 [INFO][4872] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.535 [INFO][4872] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.553 [WARNING][4872] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.553 [INFO][4872] ipam_plugin.go 439: Releasing address using workloadID ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.557 [INFO][4872] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:13.566524 containerd[1460]: 2024-07-02 00:19:13.563 [INFO][4866] k8s.go 621: Teardown processing complete. ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:13.568727 containerd[1460]: time="2024-07-02T00:19:13.566577181Z" level=info msg="TearDown network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\" successfully" Jul 2 00:19:13.568727 containerd[1460]: time="2024-07-02T00:19:13.566618839Z" level=info msg="StopPodSandbox for \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\" returns successfully" Jul 2 00:19:13.568727 containerd[1460]: time="2024-07-02T00:19:13.567524301Z" level=info msg="RemovePodSandbox for \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\"" Jul 2 00:19:13.568727 containerd[1460]: time="2024-07-02T00:19:13.567590673Z" level=info msg="Forcibly stopping sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\"" Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.652 [WARNING][4890] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40064fc9-24a4-4ccf-9623-b652332a27c6", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"4302e6e21816bac37fb426fbf23da887d01eaaa935e2457c6dbaa3d7b90248e8", Pod:"csi-node-driver-9kkwj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie2282ffe0d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.652 [INFO][4890] k8s.go 608: Cleaning up netns ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.652 [INFO][4890] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" iface="eth0" netns="" Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.652 [INFO][4890] k8s.go 615: Releasing IP address(es) ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.652 [INFO][4890] utils.go 188: Calico CNI releasing IP address ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.706 [INFO][4896] ipam_plugin.go 411: Releasing address using handleID ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.706 [INFO][4896] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.706 [INFO][4896] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.717 [WARNING][4896] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.717 [INFO][4896] ipam_plugin.go 439: Releasing address using workloadID ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" HandleID="k8s-pod-network.16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Workload="ci--3975.1.1--8--31c642c6eb-k8s-csi--node--driver--9kkwj-eth0" Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.731 [INFO][4896] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:13.742624 containerd[1460]: 2024-07-02 00:19:13.738 [INFO][4890] k8s.go 621: Teardown processing complete. ContainerID="16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86" Jul 2 00:19:13.742624 containerd[1460]: time="2024-07-02T00:19:13.742586060Z" level=info msg="TearDown network for sandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\" successfully" Jul 2 00:19:13.785626 containerd[1460]: time="2024-07-02T00:19:13.785528309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:19:13.786442 containerd[1460]: time="2024-07-02T00:19:13.785652157Z" level=info msg="RemovePodSandbox \"16af76db66000a6965552600f4ead695b8494e1443873473780c4ecbe8530c86\" returns successfully" Jul 2 00:19:13.787134 containerd[1460]: time="2024-07-02T00:19:13.787083869Z" level=info msg="StopPodSandbox for \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\"" Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:13.952 [WARNING][4914] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0", GenerateName:"calico-kube-controllers-84d6cb5bbb-", Namespace:"calico-system", SelfLink:"", UID:"8d27d654-6b07-46b5-9c59-a377d9f9d512", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84d6cb5bbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311", Pod:"calico-kube-controllers-84d6cb5bbb-c2tkd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a23ac7bc62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:13.953 [INFO][4914] k8s.go 608: Cleaning up netns ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:13.953 [INFO][4914] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" iface="eth0" netns="" Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:13.953 [INFO][4914] k8s.go 615: Releasing IP address(es) ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:13.953 [INFO][4914] utils.go 188: Calico CNI releasing IP address ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:14.046 [INFO][4920] ipam_plugin.go 411: Releasing address using handleID ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:14.046 [INFO][4920] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:14.046 [INFO][4920] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:14.060 [WARNING][4920] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:14.060 [INFO][4920] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:14.066 [INFO][4920] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:14.073075 containerd[1460]: 2024-07-02 00:19:14.070 [INFO][4914] k8s.go 621: Teardown processing complete. ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:14.075195 containerd[1460]: time="2024-07-02T00:19:14.074439188Z" level=info msg="TearDown network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\" successfully" Jul 2 00:19:14.075195 containerd[1460]: time="2024-07-02T00:19:14.074492346Z" level=info msg="StopPodSandbox for \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\" returns successfully" Jul 2 00:19:14.077296 containerd[1460]: time="2024-07-02T00:19:14.075847059Z" level=info msg="RemovePodSandbox for \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\"" Jul 2 00:19:14.077296 containerd[1460]: time="2024-07-02T00:19:14.075942204Z" level=info msg="Forcibly stopping sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\"" Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.174 [WARNING][4938] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0", GenerateName:"calico-kube-controllers-84d6cb5bbb-", Namespace:"calico-system", SelfLink:"", UID:"8d27d654-6b07-46b5-9c59-a377d9f9d512", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84d6cb5bbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"e75d9aba7d0f1c598dfaa3cd5f125e7737bd4b081c19cb5573ed651bbb82d311", Pod:"calico-kube-controllers-84d6cb5bbb-c2tkd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a23ac7bc62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.174 [INFO][4938] k8s.go 608: Cleaning up netns ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.174 [INFO][4938] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" iface="eth0" netns="" Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.174 [INFO][4938] k8s.go 615: Releasing IP address(es) ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.174 [INFO][4938] utils.go 188: Calico CNI releasing IP address ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.230 [INFO][4945] ipam_plugin.go 411: Releasing address using handleID ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.230 [INFO][4945] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.231 [INFO][4945] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.244 [WARNING][4945] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.244 [INFO][4945] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" HandleID="k8s-pod-network.f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--kube--controllers--84d6cb5bbb--c2tkd-eth0" Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.251 [INFO][4945] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:19:14.259438 containerd[1460]: 2024-07-02 00:19:14.254 [INFO][4938] k8s.go 621: Teardown processing complete. ContainerID="f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda" Jul 2 00:19:14.259438 containerd[1460]: time="2024-07-02T00:19:14.257616562Z" level=info msg="TearDown network for sandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\" successfully" Jul 2 00:19:14.277055 containerd[1460]: time="2024-07-02T00:19:14.276971863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:19:14.277615 containerd[1460]: time="2024-07-02T00:19:14.277534862Z" level=info msg="RemovePodSandbox \"f95609100b6becc88a27e3d5f937698f4c32fba82df406f9fd8f6cb9fb80dcda\" returns successfully" Jul 2 00:19:14.278504 containerd[1460]: time="2024-07-02T00:19:14.278460575Z" level=info msg="StopPodSandbox for \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\"" Jul 2 00:19:14.278635 containerd[1460]: time="2024-07-02T00:19:14.278607174Z" level=info msg="TearDown network for sandbox \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\" successfully" Jul 2 00:19:14.278635 containerd[1460]: time="2024-07-02T00:19:14.278629409Z" level=info msg="StopPodSandbox for \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\" returns successfully" Jul 2 00:19:14.279661 containerd[1460]: time="2024-07-02T00:19:14.279075556Z" level=info msg="RemovePodSandbox for \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\"" Jul 2 00:19:14.279661 containerd[1460]: time="2024-07-02T00:19:14.279106501Z" level=info msg="Forcibly stopping sandbox \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\"" Jul 2 00:19:14.286382 containerd[1460]: time="2024-07-02T00:19:14.279206832Z" level=info msg="TearDown network for sandbox \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\" successfully" Jul 2 00:19:14.300918 containerd[1460]: time="2024-07-02T00:19:14.299365417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:19:14.300918 containerd[1460]: time="2024-07-02T00:19:14.299519198Z" level=info msg="RemovePodSandbox \"589ce236ef4df78c2f02eaa7224b9b351990cc55548e6106e047cb04d97a3687\" returns successfully" Jul 2 00:19:14.300918 containerd[1460]: time="2024-07-02T00:19:14.300659150Z" level=info msg="StopPodSandbox for \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\"" Jul 2 00:19:14.301252 containerd[1460]: time="2024-07-02T00:19:14.301022518Z" level=info msg="TearDown network for sandbox \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\" successfully" Jul 2 00:19:14.301252 containerd[1460]: time="2024-07-02T00:19:14.301075016Z" level=info msg="StopPodSandbox for \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\" returns successfully" Jul 2 00:19:14.302033 containerd[1460]: time="2024-07-02T00:19:14.301973785Z" level=info msg="RemovePodSandbox for \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\"" Jul 2 00:19:14.302156 containerd[1460]: time="2024-07-02T00:19:14.302061533Z" level=info msg="Forcibly stopping sandbox \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\"" Jul 2 00:19:14.302189 containerd[1460]: time="2024-07-02T00:19:14.302142007Z" level=info msg="TearDown network for sandbox \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\" successfully" Jul 2 00:19:14.309454 containerd[1460]: time="2024-07-02T00:19:14.309360728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:19:14.309454 containerd[1460]: time="2024-07-02T00:19:14.309465738Z" level=info msg="RemovePodSandbox \"a668524987b1cd96d35f0a10eea62128ea378b8fb3b65e75c653fb0e6fac857e\" returns successfully" Jul 2 00:19:16.013081 kubelet[2555]: E0702 00:19:16.012489 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:16.218013 kubelet[2555]: I0702 00:19:16.216322 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9kkwj" podStartSLOduration=39.2215901 podStartE2EDuration="44.216215542s" podCreationTimestamp="2024-07-02 00:18:32 +0000 UTC" firstStartedPulling="2024-07-02 00:19:02.328762294 +0000 UTC m=+50.313166347" lastFinishedPulling="2024-07-02 00:19:07.323387737 +0000 UTC m=+55.307791789" observedRunningTime="2024-07-02 00:19:07.691079889 +0000 UTC m=+55.675483963" watchObservedRunningTime="2024-07-02 00:19:16.216215542 +0000 UTC m=+64.200619620" Jul 2 00:19:16.801067 kubelet[2555]: E0702 00:19:16.801027 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:17.583345 systemd[1]: Started sshd@9-64.227.97.255:22-147.75.109.163:47876.service - OpenSSH per-connection server daemon (147.75.109.163:47876). Jul 2 00:19:17.664517 sshd[5027]: Accepted publickey for core from 147.75.109.163 port 47876 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:17.667313 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:17.677509 systemd-logind[1446]: New session 10 of user core. Jul 2 00:19:17.684343 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 00:19:17.903656 sshd[5027]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:17.911951 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:19:17.912831 systemd[1]: sshd@9-64.227.97.255:22-147.75.109.163:47876.service: Deactivated successfully. Jul 2 00:19:17.918324 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:19:17.923279 systemd-logind[1446]: Removed session 10. Jul 2 00:19:22.924601 systemd[1]: Started sshd@10-64.227.97.255:22-147.75.109.163:37766.service - OpenSSH per-connection server daemon (147.75.109.163:37766). Jul 2 00:19:22.984360 sshd[5043]: Accepted publickey for core from 147.75.109.163 port 37766 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:22.986728 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:22.995988 systemd-logind[1446]: New session 11 of user core. Jul 2 00:19:23.004376 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:19:23.178999 sshd[5043]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:23.190973 systemd[1]: sshd@10-64.227.97.255:22-147.75.109.163:37766.service: Deactivated successfully. Jul 2 00:19:23.196376 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:19:23.198883 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:19:23.210048 systemd[1]: Started sshd@11-64.227.97.255:22-147.75.109.163:37770.service - OpenSSH per-connection server daemon (147.75.109.163:37770). Jul 2 00:19:23.215701 systemd-logind[1446]: Removed session 11. Jul 2 00:19:23.263795 sshd[5057]: Accepted publickey for core from 147.75.109.163 port 37770 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:23.266562 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:23.274021 systemd-logind[1446]: New session 12 of user core. 
Jul 2 00:19:23.279216 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:19:23.520878 sshd[5057]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:23.535757 systemd[1]: sshd@11-64.227.97.255:22-147.75.109.163:37770.service: Deactivated successfully. Jul 2 00:19:23.542077 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:19:23.547604 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:19:23.553768 systemd[1]: Started sshd@12-64.227.97.255:22-147.75.109.163:37774.service - OpenSSH per-connection server daemon (147.75.109.163:37774). Jul 2 00:19:23.558735 systemd-logind[1446]: Removed session 12. Jul 2 00:19:23.643835 sshd[5068]: Accepted publickey for core from 147.75.109.163 port 37774 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:23.646353 sshd[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:23.654829 systemd-logind[1446]: New session 13 of user core. Jul 2 00:19:23.661214 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:19:23.851174 sshd[5068]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:23.858167 systemd[1]: sshd@12-64.227.97.255:22-147.75.109.163:37774.service: Deactivated successfully. Jul 2 00:19:23.863962 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:19:23.867168 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:19:23.869506 systemd-logind[1446]: Removed session 13. Jul 2 00:19:28.278295 kubelet[2555]: E0702 00:19:28.277665 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:28.874400 systemd[1]: Started sshd@13-64.227.97.255:22-147.75.109.163:37778.service - OpenSSH per-connection server daemon (147.75.109.163:37778). 
Jul 2 00:19:28.929203 sshd[5088]: Accepted publickey for core from 147.75.109.163 port 37778 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:28.931610 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:28.938352 systemd-logind[1446]: New session 14 of user core. Jul 2 00:19:28.946220 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:19:29.123295 sshd[5088]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:29.130631 systemd[1]: sshd@13-64.227.97.255:22-147.75.109.163:37778.service: Deactivated successfully. Jul 2 00:19:29.135692 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:19:29.137135 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:19:29.138308 systemd-logind[1446]: Removed session 14. Jul 2 00:19:30.282309 kubelet[2555]: E0702 00:19:30.282190 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:34.151513 systemd[1]: Started sshd@14-64.227.97.255:22-147.75.109.163:60548.service - OpenSSH per-connection server daemon (147.75.109.163:60548). Jul 2 00:19:34.234489 sshd[5104]: Accepted publickey for core from 147.75.109.163 port 60548 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:34.238792 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:34.255026 systemd-logind[1446]: New session 15 of user core. Jul 2 00:19:34.261457 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:19:34.587836 sshd[5104]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:34.596225 systemd[1]: sshd@14-64.227.97.255:22-147.75.109.163:60548.service: Deactivated successfully. Jul 2 00:19:34.605644 systemd[1]: session-15.scope: Deactivated successfully. 
Jul 2 00:19:34.609537 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:19:34.613882 systemd-logind[1446]: Removed session 15.
Jul 2 00:19:39.614465 systemd[1]: Started sshd@15-64.227.97.255:22-147.75.109.163:60562.service - OpenSSH per-connection server daemon (147.75.109.163:60562).
Jul 2 00:19:39.782950 sshd[5148]: Accepted publickey for core from 147.75.109.163 port 60562 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:39.787392 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:39.813200 systemd-logind[1446]: New session 16 of user core.
Jul 2 00:19:39.816687 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 00:19:40.265254 sshd[5148]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:40.271718 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:19:40.272037 systemd[1]: sshd@15-64.227.97.255:22-147.75.109.163:60562.service: Deactivated successfully.
Jul 2 00:19:40.276810 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:19:40.288595 systemd-logind[1446]: Removed session 16.
Jul 2 00:19:42.279604 kubelet[2555]: E0702 00:19:42.278984 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:19:45.284953 systemd[1]: Started sshd@16-64.227.97.255:22-147.75.109.163:33586.service - OpenSSH per-connection server daemon (147.75.109.163:33586).
Jul 2 00:19:45.377914 sshd[5182]: Accepted publickey for core from 147.75.109.163 port 33586 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:45.379799 sshd[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:45.390134 systemd-logind[1446]: New session 17 of user core.
Jul 2 00:19:45.397926 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:19:45.590462 sshd[5182]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:45.603152 systemd[1]: sshd@16-64.227.97.255:22-147.75.109.163:33586.service: Deactivated successfully.
Jul 2 00:19:45.606656 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:19:45.611761 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:19:45.616551 systemd[1]: Started sshd@17-64.227.97.255:22-147.75.109.163:33590.service - OpenSSH per-connection server daemon (147.75.109.163:33590).
Jul 2 00:19:45.618363 systemd-logind[1446]: Removed session 17.
Jul 2 00:19:45.690399 sshd[5195]: Accepted publickey for core from 147.75.109.163 port 33590 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:45.693227 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:45.701089 systemd-logind[1446]: New session 18 of user core.
Jul 2 00:19:45.707383 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:19:46.115127 sshd[5195]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:46.130128 systemd[1]: Started sshd@18-64.227.97.255:22-147.75.109.163:33598.service - OpenSSH per-connection server daemon (147.75.109.163:33598).
Jul 2 00:19:46.131179 systemd[1]: sshd@17-64.227.97.255:22-147.75.109.163:33590.service: Deactivated successfully.
Jul 2 00:19:46.143190 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:19:46.146378 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:19:46.152673 systemd-logind[1446]: Removed session 18.
Jul 2 00:19:46.238038 sshd[5225]: Accepted publickey for core from 147.75.109.163 port 33598 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:46.241242 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:46.249256 systemd-logind[1446]: New session 19 of user core.
Jul 2 00:19:46.260229 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:19:47.280111 kubelet[2555]: E0702 00:19:47.278300 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:19:48.426768 sshd[5225]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:48.442379 systemd[1]: sshd@18-64.227.97.255:22-147.75.109.163:33598.service: Deactivated successfully.
Jul 2 00:19:48.446678 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:19:48.451353 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:19:48.466805 systemd[1]: Started sshd@19-64.227.97.255:22-147.75.109.163:33612.service - OpenSSH per-connection server daemon (147.75.109.163:33612).
Jul 2 00:19:48.476584 systemd-logind[1446]: Removed session 19.
Jul 2 00:19:48.604277 sshd[5243]: Accepted publickey for core from 147.75.109.163 port 33612 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:48.607510 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:48.617643 systemd-logind[1446]: New session 20 of user core.
Jul 2 00:19:48.623404 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:19:49.212578 sshd[5243]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:49.224277 systemd[1]: sshd@19-64.227.97.255:22-147.75.109.163:33612.service: Deactivated successfully.
Jul 2 00:19:49.229466 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:19:49.231268 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:19:49.240683 systemd[1]: Started sshd@20-64.227.97.255:22-147.75.109.163:33624.service - OpenSSH per-connection server daemon (147.75.109.163:33624).
Jul 2 00:19:49.244122 systemd-logind[1446]: Removed session 20.
Jul 2 00:19:49.340979 sshd[5257]: Accepted publickey for core from 147.75.109.163 port 33624 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:49.342705 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:49.351044 systemd-logind[1446]: New session 21 of user core.
Jul 2 00:19:49.356167 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:19:49.504966 sshd[5257]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:49.511978 systemd[1]: sshd@20-64.227.97.255:22-147.75.109.163:33624.service: Deactivated successfully.
Jul 2 00:19:49.517595 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:19:49.518736 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:19:49.520543 systemd-logind[1446]: Removed session 21.
Jul 2 00:19:52.279969 kubelet[2555]: E0702 00:19:52.278334 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:19:54.527364 systemd[1]: Started sshd@21-64.227.97.255:22-147.75.109.163:47700.service - OpenSSH per-connection server daemon (147.75.109.163:47700).
Jul 2 00:19:54.585505 sshd[5275]: Accepted publickey for core from 147.75.109.163 port 47700 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:54.587356 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:54.594973 systemd-logind[1446]: New session 22 of user core.
Jul 2 00:19:54.607816 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:19:54.753264 sshd[5275]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:54.757332 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:19:54.757729 systemd[1]: sshd@21-64.227.97.255:22-147.75.109.163:47700.service: Deactivated successfully.
Jul 2 00:19:54.761239 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:19:54.764258 systemd-logind[1446]: Removed session 22.
Jul 2 00:19:59.796411 systemd[1]: Started sshd@22-64.227.97.255:22-147.75.109.163:47712.service - OpenSSH per-connection server daemon (147.75.109.163:47712).
Jul 2 00:19:59.873761 sshd[5294]: Accepted publickey for core from 147.75.109.163 port 47712 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:59.887065 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:59.898979 systemd-logind[1446]: New session 23 of user core.
Jul 2 00:19:59.902338 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:20:00.173836 sshd[5294]: pam_unix(sshd:session): session closed for user core
Jul 2 00:20:00.190400 systemd[1]: sshd@22-64.227.97.255:22-147.75.109.163:47712.service: Deactivated successfully.
Jul 2 00:20:00.196419 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:20:00.199834 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:20:00.208435 systemd-logind[1446]: Removed session 23.
Jul 2 00:20:05.197171 systemd[1]: Started sshd@23-64.227.97.255:22-147.75.109.163:59152.service - OpenSSH per-connection server daemon (147.75.109.163:59152).
Jul 2 00:20:05.282827 sshd[5312]: Accepted publickey for core from 147.75.109.163 port 59152 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:20:05.286773 sshd[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:20:05.303606 systemd-logind[1446]: New session 24 of user core.
Jul 2 00:20:05.310166 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:20:05.533773 sshd[5312]: pam_unix(sshd:session): session closed for user core
Jul 2 00:20:05.550039 systemd[1]: sshd@23-64.227.97.255:22-147.75.109.163:59152.service: Deactivated successfully.
Jul 2 00:20:05.553064 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:20:05.555473 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:20:05.559563 systemd-logind[1446]: Removed session 24.
Jul 2 00:20:07.321338 kubelet[2555]: I0702 00:20:07.321265 2555 topology_manager.go:215] "Topology Admit Handler" podUID="653dff8f-82c5-4b91-9461-27ac7fc35774" podNamespace="calico-apiserver" podName="calico-apiserver-666cbfcdbd-cp4fm"
Jul 2 00:20:07.384911 systemd[1]: Created slice kubepods-besteffort-pod653dff8f_82c5_4b91_9461_27ac7fc35774.slice - libcontainer container kubepods-besteffort-pod653dff8f_82c5_4b91_9461_27ac7fc35774.slice.
Jul 2 00:20:07.438220 kubelet[2555]: I0702 00:20:07.437473 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77gmb\" (UniqueName: \"kubernetes.io/projected/653dff8f-82c5-4b91-9461-27ac7fc35774-kube-api-access-77gmb\") pod \"calico-apiserver-666cbfcdbd-cp4fm\" (UID: \"653dff8f-82c5-4b91-9461-27ac7fc35774\") " pod="calico-apiserver/calico-apiserver-666cbfcdbd-cp4fm"
Jul 2 00:20:07.438220 kubelet[2555]: I0702 00:20:07.438165 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/653dff8f-82c5-4b91-9461-27ac7fc35774-calico-apiserver-certs\") pod \"calico-apiserver-666cbfcdbd-cp4fm\" (UID: \"653dff8f-82c5-4b91-9461-27ac7fc35774\") " pod="calico-apiserver/calico-apiserver-666cbfcdbd-cp4fm"
Jul 2 00:20:07.545705 kubelet[2555]: E0702 00:20:07.545247 2555 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Jul 2 00:20:07.579331 kubelet[2555]: E0702 00:20:07.579085 2555 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/653dff8f-82c5-4b91-9461-27ac7fc35774-calico-apiserver-certs podName:653dff8f-82c5-4b91-9461-27ac7fc35774 nodeName:}" failed. No retries permitted until 2024-07-02 00:20:08.045370851 +0000 UTC m=+116.029774922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/653dff8f-82c5-4b91-9461-27ac7fc35774-calico-apiserver-certs") pod "calico-apiserver-666cbfcdbd-cp4fm" (UID: "653dff8f-82c5-4b91-9461-27ac7fc35774") : secret "calico-apiserver-certs" not found
Jul 2 00:20:08.281775 kubelet[2555]: E0702 00:20:08.281162 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:20:08.294465 containerd[1460]: time="2024-07-02T00:20:08.294372316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666cbfcdbd-cp4fm,Uid:653dff8f-82c5-4b91-9461-27ac7fc35774,Namespace:calico-apiserver,Attempt:0,}"
Jul 2 00:20:08.622454 systemd-networkd[1370]: cali7f548eb7414: Link UP
Jul 2 00:20:08.625710 systemd-networkd[1370]: cali7f548eb7414: Gained carrier
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.452 [INFO][5331] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0 calico-apiserver-666cbfcdbd- calico-apiserver 653dff8f-82c5-4b91-9461-27ac7fc35774 1307 0 2024-07-02 00:20:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:666cbfcdbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-8-31c642c6eb calico-apiserver-666cbfcdbd-cp4fm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7f548eb7414 [] []}} ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Namespace="calico-apiserver" Pod="calico-apiserver-666cbfcdbd-cp4fm" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.453 [INFO][5331] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Namespace="calico-apiserver" Pod="calico-apiserver-666cbfcdbd-cp4fm" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.506 [INFO][5341] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" HandleID="k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.523 [INFO][5341] ipam_plugin.go 264: Auto assigning IP ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" HandleID="k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-8-31c642c6eb", "pod":"calico-apiserver-666cbfcdbd-cp4fm", "timestamp":"2024-07-02 00:20:08.50658487 +0000 UTC"}, Hostname:"ci-3975.1.1-8-31c642c6eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.524 [INFO][5341] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.525 [INFO][5341] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.525 [INFO][5341] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-8-31c642c6eb'
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.533 [INFO][5341] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.549 [INFO][5341] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.565 [INFO][5341] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.574 [INFO][5341] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.583 [INFO][5341] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.583 [INFO][5341] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.588 [INFO][5341] ipam.go 1685: Creating new handle: k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.598 [INFO][5341] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.609 [INFO][5341] ipam.go 1216: Successfully claimed IPs: [192.168.13.69/26] block=192.168.13.64/26 handle="k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.609 [INFO][5341] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.69/26] handle="k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" host="ci-3975.1.1-8-31c642c6eb"
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.609 [INFO][5341] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:20:08.661798 containerd[1460]: 2024-07-02 00:20:08.609 [INFO][5341] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.69/26] IPv6=[] ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" HandleID="k8s-pod-network.7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Workload="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0"
Jul 2 00:20:08.665401 containerd[1460]: 2024-07-02 00:20:08.614 [INFO][5331] k8s.go 386: Populated endpoint ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Namespace="calico-apiserver" Pod="calico-apiserver-666cbfcdbd-cp4fm" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0", GenerateName:"calico-apiserver-666cbfcdbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"653dff8f-82c5-4b91-9461-27ac7fc35774", ResourceVersion:"1307", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666cbfcdbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"", Pod:"calico-apiserver-666cbfcdbd-cp4fm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f548eb7414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:20:08.665401 containerd[1460]: 2024-07-02 00:20:08.615 [INFO][5331] k8s.go 387: Calico CNI using IPs: [192.168.13.69/32] ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Namespace="calico-apiserver" Pod="calico-apiserver-666cbfcdbd-cp4fm" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0"
Jul 2 00:20:08.665401 containerd[1460]: 2024-07-02 00:20:08.615 [INFO][5331] dataplane_linux.go 68: Setting the host side veth name to cali7f548eb7414 ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Namespace="calico-apiserver" Pod="calico-apiserver-666cbfcdbd-cp4fm" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0"
Jul 2 00:20:08.665401 containerd[1460]: 2024-07-02 00:20:08.622 [INFO][5331] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Namespace="calico-apiserver" Pod="calico-apiserver-666cbfcdbd-cp4fm" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0"
Jul 2 00:20:08.665401 containerd[1460]: 2024-07-02 00:20:08.626 [INFO][5331] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Namespace="calico-apiserver" Pod="calico-apiserver-666cbfcdbd-cp4fm" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0", GenerateName:"calico-apiserver-666cbfcdbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"653dff8f-82c5-4b91-9461-27ac7fc35774", ResourceVersion:"1307", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666cbfcdbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-8-31c642c6eb", ContainerID:"7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e", Pod:"calico-apiserver-666cbfcdbd-cp4fm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f548eb7414", MAC:"32:a4:5a:56:28:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:20:08.665401 containerd[1460]: 2024-07-02 00:20:08.650 [INFO][5331] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e" Namespace="calico-apiserver" Pod="calico-apiserver-666cbfcdbd-cp4fm" WorkloadEndpoint="ci--3975.1.1--8--31c642c6eb-k8s-calico--apiserver--666cbfcdbd--cp4fm-eth0"
Jul 2 00:20:08.852473 containerd[1460]: time="2024-07-02T00:20:08.851918341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:20:08.852473 containerd[1460]: time="2024-07-02T00:20:08.852063595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:08.852473 containerd[1460]: time="2024-07-02T00:20:08.852128027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:20:08.852473 containerd[1460]: time="2024-07-02T00:20:08.852152707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:08.923211 systemd[1]: run-containerd-runc-k8s.io-7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e-runc.jLNo50.mount: Deactivated successfully.
Jul 2 00:20:08.936827 systemd[1]: Started cri-containerd-7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e.scope - libcontainer container 7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e.
Jul 2 00:20:09.105652 containerd[1460]: time="2024-07-02T00:20:09.105516800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666cbfcdbd-cp4fm,Uid:653dff8f-82c5-4b91-9461-27ac7fc35774,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e\""
Jul 2 00:20:09.126655 containerd[1460]: time="2024-07-02T00:20:09.125930501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 00:20:10.548544 systemd[1]: Started sshd@24-64.227.97.255:22-147.75.109.163:59162.service - OpenSSH per-connection server daemon (147.75.109.163:59162).
Jul 2 00:20:10.619121 systemd-networkd[1370]: cali7f548eb7414: Gained IPv6LL
Jul 2 00:20:10.655846 sshd[5408]: Accepted publickey for core from 147.75.109.163 port 59162 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:20:10.660900 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:20:10.675578 systemd-logind[1446]: New session 25 of user core.
Jul 2 00:20:10.682165 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:20:11.305732 sshd[5408]: pam_unix(sshd:session): session closed for user core
Jul 2 00:20:11.317391 systemd[1]: sshd@24-64.227.97.255:22-147.75.109.163:59162.service: Deactivated successfully.
Jul 2 00:20:11.328063 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:20:11.334074 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:20:11.339635 systemd-logind[1446]: Removed session 25.
Jul 2 00:20:12.743633 containerd[1460]: time="2024-07-02T00:20:12.742535279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:12.744564 containerd[1460]: time="2024-07-02T00:20:12.744480275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul 2 00:20:12.745963 containerd[1460]: time="2024-07-02T00:20:12.745874421Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:12.750707 containerd[1460]: time="2024-07-02T00:20:12.750645065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:12.751461 containerd[1460]: time="2024-07-02T00:20:12.751403346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.624352677s"
Jul 2 00:20:12.751461 containerd[1460]: time="2024-07-02T00:20:12.751459486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul 2 00:20:12.769321 containerd[1460]: time="2024-07-02T00:20:12.769259572Z" level=info msg="CreateContainer within sandbox \"7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 00:20:12.799824 containerd[1460]: time="2024-07-02T00:20:12.799577535Z" level=info msg="CreateContainer within sandbox \"7848d5df72b618f930de3bf5a0f759ae9c8462419568a857007283ed12d06c3e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d39cab271469cb40e9396735edaee2eb3763ec77ec63fe259069976cf74b01c\""
Jul 2 00:20:12.802944 containerd[1460]: time="2024-07-02T00:20:12.801454993Z" level=info msg="StartContainer for \"8d39cab271469cb40e9396735edaee2eb3763ec77ec63fe259069976cf74b01c\""
Jul 2 00:20:12.864962 systemd[1]: run-containerd-runc-k8s.io-8d39cab271469cb40e9396735edaee2eb3763ec77ec63fe259069976cf74b01c-runc.fwcALO.mount: Deactivated successfully.
Jul 2 00:20:12.878647 systemd[1]: Started cri-containerd-8d39cab271469cb40e9396735edaee2eb3763ec77ec63fe259069976cf74b01c.scope - libcontainer container 8d39cab271469cb40e9396735edaee2eb3763ec77ec63fe259069976cf74b01c.
Jul 2 00:20:12.966478 containerd[1460]: time="2024-07-02T00:20:12.966382197Z" level=info msg="StartContainer for \"8d39cab271469cb40e9396735edaee2eb3763ec77ec63fe259069976cf74b01c\" returns successfully"
Jul 2 00:20:14.569911 kubelet[2555]: I0702 00:20:14.568724 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-666cbfcdbd-cp4fm" podStartSLOduration=3.91750327 podStartE2EDuration="7.561306859s" podCreationTimestamp="2024-07-02 00:20:07 +0000 UTC" firstStartedPulling="2024-07-02 00:20:09.108023713 +0000 UTC m=+117.092427763" lastFinishedPulling="2024-07-02 00:20:12.751827301 +0000 UTC m=+120.736231352" observedRunningTime="2024-07-02 00:20:13.034584242 +0000 UTC m=+121.018988315" watchObservedRunningTime="2024-07-02 00:20:14.561306859 +0000 UTC m=+122.545710956"
Jul 2 00:20:16.042931 systemd[1]: run-containerd-runc-k8s.io-6e276e3287e9b736f3485c1c8565248333e36206b322c31fbc036e31b97263ce-runc.wq4jpA.mount: Deactivated successfully.
Jul 2 00:20:16.321293 systemd[1]: Started sshd@25-64.227.97.255:22-147.75.109.163:46900.service - OpenSSH per-connection server daemon (147.75.109.163:46900).
Jul 2 00:20:16.458005 sshd[5526]: Accepted publickey for core from 147.75.109.163 port 46900 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:20:16.461059 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:20:16.470106 systemd-logind[1446]: New session 26 of user core.
Jul 2 00:20:16.479244 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:20:17.097973 sshd[5526]: pam_unix(sshd:session): session closed for user core
Jul 2 00:20:17.109732 systemd[1]: sshd@25-64.227.97.255:22-147.75.109.163:46900.service: Deactivated successfully.
Jul 2 00:20:17.116598 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:20:17.117971 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:20:17.121837 systemd-logind[1446]: Removed session 26.
Jul 2 00:20:22.131894 systemd[1]: Started sshd@26-64.227.97.255:22-147.75.109.163:46908.service - OpenSSH per-connection server daemon (147.75.109.163:46908).
Jul 2 00:20:22.213987 sshd[5553]: Accepted publickey for core from 147.75.109.163 port 46908 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:20:22.216994 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:20:22.225259 systemd-logind[1446]: New session 27 of user core.
Jul 2 00:20:22.234397 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:20:22.282815 kubelet[2555]: E0702 00:20:22.281960 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:20:22.534567 sshd[5553]: pam_unix(sshd:session): session closed for user core
Jul 2 00:20:22.543212 systemd[1]: sshd@26-64.227.97.255:22-147.75.109.163:46908.service: Deactivated successfully.
Jul 2 00:20:22.549391 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:20:22.555914 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:20:22.558981 systemd-logind[1446]: Removed session 27.
Jul 2 00:20:26.278991 kubelet[2555]: E0702 00:20:26.278931 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:20:27.558773 systemd[1]: Started sshd@27-64.227.97.255:22-147.75.109.163:58904.service - OpenSSH per-connection server daemon (147.75.109.163:58904).
Jul 2 00:20:27.616315 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 58904 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:20:27.618399 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:20:27.625104 systemd-logind[1446]: New session 28 of user core.
Jul 2 00:20:27.636259 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 00:20:27.827195 sshd[5569]: pam_unix(sshd:session): session closed for user core
Jul 2 00:20:27.835746 systemd[1]: sshd@27-64.227.97.255:22-147.75.109.163:58904.service: Deactivated successfully.
Jul 2 00:20:27.840005 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 00:20:27.844123 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit.
Jul 2 00:20:27.845873 systemd-logind[1446]: Removed session 28.