Jun 26 07:14:43.939202 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 26 07:14:43.939230 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 26 07:14:43.939243 kernel: BIOS-provided physical RAM map:
Jun 26 07:14:43.939250 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 26 07:14:43.939256 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 26 07:14:43.939263 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 26 07:14:43.939271 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jun 26 07:14:43.939278 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jun 26 07:14:43.939284 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 26 07:14:43.939294 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 26 07:14:43.939301 kernel: NX (Execute Disable) protection: active
Jun 26 07:14:43.939308 kernel: APIC: Static calls initialized
Jun 26 07:14:43.939315 kernel: SMBIOS 2.8 present.
Jun 26 07:14:43.939322 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jun 26 07:14:43.939334 kernel: Hypervisor detected: KVM
Jun 26 07:14:43.939348 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 26 07:14:43.939359 kernel: kvm-clock: using sched offset of 3245163189 cycles
Jun 26 07:14:43.939370 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 26 07:14:43.939378 kernel: tsc: Detected 2494.138 MHz processor
Jun 26 07:14:43.939386 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 26 07:14:43.939395 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 26 07:14:43.939407 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jun 26 07:14:43.939420 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 26 07:14:43.939431 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 26 07:14:43.939445 kernel: ACPI: Early table checksum verification disabled
Jun 26 07:14:43.939453 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jun 26 07:14:43.939461 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:14:43.939468 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:14:43.939476 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:14:43.939484 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jun 26 07:14:43.939491 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:14:43.939503 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:14:43.939517 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:14:43.939531 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:14:43.939539 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jun 26 07:14:43.939546 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jun 26 07:14:43.939554 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jun 26 07:14:43.939562 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jun 26 07:14:43.939569 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jun 26 07:14:43.939580 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jun 26 07:14:43.939602 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jun 26 07:14:43.939613 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 26 07:14:43.939624 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 26 07:14:43.939637 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jun 26 07:14:43.939653 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jun 26 07:14:43.939666 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jun 26 07:14:43.939679 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jun 26 07:14:43.939696 kernel: Zone ranges:
Jun 26 07:14:43.939708 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 26 07:14:43.939719 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jun 26 07:14:43.939732 kernel: Normal empty
Jun 26 07:14:43.941809 kernel: Movable zone start for each node
Jun 26 07:14:43.941826 kernel: Early memory node ranges
Jun 26 07:14:43.941835 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 26 07:14:43.941844 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jun 26 07:14:43.941853 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jun 26 07:14:43.941867 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 26 07:14:43.941876 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 26 07:14:43.941885 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jun 26 07:14:43.941893 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 26 07:14:43.941902 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 26 07:14:43.941910 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 26 07:14:43.941919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 26 07:14:43.941928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 26 07:14:43.941936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 26 07:14:43.941947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 26 07:14:43.941956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 26 07:14:43.941964 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 26 07:14:43.941972 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 26 07:14:43.941980 kernel: TSC deadline timer available
Jun 26 07:14:43.941989 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 26 07:14:43.941997 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 26 07:14:43.942113 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jun 26 07:14:43.942124 kernel: Booting paravirtualized kernel on KVM
Jun 26 07:14:43.942160 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 26 07:14:43.942171 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 26 07:14:43.942183 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jun 26 07:14:43.942194 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jun 26 07:14:43.942205 kernel: pcpu-alloc: [0] 0 1
Jun 26 07:14:43.942216 kernel: kvm-guest: PV spinlocks disabled, no host support
Jun 26 07:14:43.942231 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 26 07:14:43.942243 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 26 07:14:43.942262 kernel: random: crng init done
Jun 26 07:14:43.942274 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 26 07:14:43.942286 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 26 07:14:43.942299 kernel: Fallback order for Node 0: 0
Jun 26 07:14:43.942311 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jun 26 07:14:43.942323 kernel: Policy zone: DMA32
Jun 26 07:14:43.942335 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 26 07:14:43.942350 kernel: Memory: 1965048K/2096600K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131292K reserved, 0K cma-reserved)
Jun 26 07:14:43.942362 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 26 07:14:43.942379 kernel: Kernel/User page tables isolation: enabled
Jun 26 07:14:43.942390 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 26 07:14:43.942402 kernel: ftrace: allocated 148 pages with 3 groups
Jun 26 07:14:43.942413 kernel: Dynamic Preempt: voluntary
Jun 26 07:14:43.942427 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 26 07:14:43.942440 kernel: rcu: RCU event tracing is enabled.
Jun 26 07:14:43.942452 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 26 07:14:43.942464 kernel: Trampoline variant of Tasks RCU enabled.
Jun 26 07:14:43.942476 kernel: Rude variant of Tasks RCU enabled.
Jun 26 07:14:43.942492 kernel: Tracing variant of Tasks RCU enabled.
Jun 26 07:14:43.942506 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 26 07:14:43.942518 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 26 07:14:43.942530 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 26 07:14:43.942542 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 26 07:14:43.942554 kernel: Console: colour VGA+ 80x25
Jun 26 07:14:43.942565 kernel: printk: console [tty0] enabled
Jun 26 07:14:43.942578 kernel: printk: console [ttyS0] enabled
Jun 26 07:14:43.942589 kernel: ACPI: Core revision 20230628
Jun 26 07:14:43.942601 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 26 07:14:43.942618 kernel: APIC: Switch to symmetric I/O mode setup
Jun 26 07:14:43.942630 kernel: x2apic enabled
Jun 26 07:14:43.942642 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 26 07:14:43.942656 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 26 07:14:43.942667 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jun 26 07:14:43.942680 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Jun 26 07:14:43.942693 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 26 07:14:43.942707 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 26 07:14:43.942737 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 26 07:14:43.943862 kernel: Spectre V2 : Mitigation: Retpolines
Jun 26 07:14:43.943876 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 26 07:14:43.943898 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 26 07:14:43.943911 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jun 26 07:14:43.943924 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 26 07:14:43.943938 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 26 07:14:43.943951 kernel: MDS: Mitigation: Clear CPU buffers
Jun 26 07:14:43.943964 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 26 07:14:43.943981 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 26 07:14:43.943993 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 26 07:14:43.944007 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 26 07:14:43.944020 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 26 07:14:43.944033 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jun 26 07:14:43.944046 kernel: Freeing SMP alternatives memory: 32K
Jun 26 07:14:43.944059 kernel: pid_max: default: 32768 minimum: 301
Jun 26 07:14:43.944072 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 26 07:14:43.944091 kernel: SELinux: Initializing.
Jun 26 07:14:43.944106 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 26 07:14:43.944120 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 26 07:14:43.944148 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jun 26 07:14:43.944161 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:14:43.944208 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:14:43.944229 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:14:43.944239 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jun 26 07:14:43.944260 kernel: signal: max sigframe size: 1776
Jun 26 07:14:43.944270 kernel: rcu: Hierarchical SRCU implementation.
Jun 26 07:14:43.944280 kernel: rcu: Max phase no-delay instances is 400.
Jun 26 07:14:43.944289 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 26 07:14:43.944298 kernel: smp: Bringing up secondary CPUs ...
Jun 26 07:14:43.944307 kernel: smpboot: x86: Booting SMP configuration:
Jun 26 07:14:43.944315 kernel: .... node #0, CPUs: #1
Jun 26 07:14:43.944324 kernel: smp: Brought up 1 node, 2 CPUs
Jun 26 07:14:43.944333 kernel: smpboot: Max logical packages: 1
Jun 26 07:14:43.944353 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Jun 26 07:14:43.944378 kernel: devtmpfs: initialized
Jun 26 07:14:43.944392 kernel: x86/mm: Memory block size: 128MB
Jun 26 07:14:43.944405 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 26 07:14:43.944420 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 26 07:14:43.944436 kernel: pinctrl core: initialized pinctrl subsystem
Jun 26 07:14:43.944445 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 26 07:14:43.944454 kernel: audit: initializing netlink subsys (disabled)
Jun 26 07:14:43.944463 kernel: audit: type=2000 audit(1719386082.729:1): state=initialized audit_enabled=0 res=1
Jun 26 07:14:43.944472 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 26 07:14:43.944486 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 26 07:14:43.944495 kernel: cpuidle: using governor menu
Jun 26 07:14:43.944503 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 26 07:14:43.944512 kernel: dca service started, version 1.12.1
Jun 26 07:14:43.944521 kernel: PCI: Using configuration type 1 for base access
Jun 26 07:14:43.944530 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 26 07:14:43.944539 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 26 07:14:43.944547 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 26 07:14:43.944556 kernel: ACPI: Added _OSI(Module Device)
Jun 26 07:14:43.944568 kernel: ACPI: Added _OSI(Processor Device)
Jun 26 07:14:43.944577 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 26 07:14:43.944586 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 26 07:14:43.944594 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 26 07:14:43.944603 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 26 07:14:43.944612 kernel: ACPI: Interpreter enabled
Jun 26 07:14:43.944620 kernel: ACPI: PM: (supports S0 S5)
Jun 26 07:14:43.944629 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 26 07:14:43.944638 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 26 07:14:43.944650 kernel: PCI: Using E820 reservations for host bridge windows
Jun 26 07:14:43.944659 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 26 07:14:43.944668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 26 07:14:43.946489 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 26 07:14:43.946612 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 26 07:14:43.946706 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 26 07:14:43.946735 kernel: acpiphp: Slot [3] registered
Jun 26 07:14:43.946763 kernel: acpiphp: Slot [4] registered
Jun 26 07:14:43.946772 kernel: acpiphp: Slot [5] registered
Jun 26 07:14:43.946781 kernel: acpiphp: Slot [6] registered
Jun 26 07:14:43.946790 kernel: acpiphp: Slot [7] registered
Jun 26 07:14:43.946799 kernel: acpiphp: Slot [8] registered
Jun 26 07:14:43.946808 kernel: acpiphp: Slot [9] registered
Jun 26 07:14:43.946817 kernel: acpiphp: Slot [10] registered
Jun 26 07:14:43.946826 kernel: acpiphp: Slot [11] registered
Jun 26 07:14:43.946834 kernel: acpiphp: Slot [12] registered
Jun 26 07:14:43.946846 kernel: acpiphp: Slot [13] registered
Jun 26 07:14:43.946860 kernel: acpiphp: Slot [14] registered
Jun 26 07:14:43.946869 kernel: acpiphp: Slot [15] registered
Jun 26 07:14:43.946878 kernel: acpiphp: Slot [16] registered
Jun 26 07:14:43.946886 kernel: acpiphp: Slot [17] registered
Jun 26 07:14:43.946895 kernel: acpiphp: Slot [18] registered
Jun 26 07:14:43.946912 kernel: acpiphp: Slot [19] registered
Jun 26 07:14:43.946921 kernel: acpiphp: Slot [20] registered
Jun 26 07:14:43.946929 kernel: acpiphp: Slot [21] registered
Jun 26 07:14:43.946938 kernel: acpiphp: Slot [22] registered
Jun 26 07:14:43.946949 kernel: acpiphp: Slot [23] registered
Jun 26 07:14:43.946958 kernel: acpiphp: Slot [24] registered
Jun 26 07:14:43.946967 kernel: acpiphp: Slot [25] registered
Jun 26 07:14:43.946975 kernel: acpiphp: Slot [26] registered
Jun 26 07:14:43.946984 kernel: acpiphp: Slot [27] registered
Jun 26 07:14:43.946993 kernel: acpiphp: Slot [28] registered
Jun 26 07:14:43.947002 kernel: acpiphp: Slot [29] registered
Jun 26 07:14:43.947011 kernel: acpiphp: Slot [30] registered
Jun 26 07:14:43.947020 kernel: acpiphp: Slot [31] registered
Jun 26 07:14:43.947031 kernel: PCI host bridge to bus 0000:00
Jun 26 07:14:43.947136 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 26 07:14:43.947223 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 26 07:14:43.947317 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 26 07:14:43.947446 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jun 26 07:14:43.947556 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jun 26 07:14:43.947688 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 26 07:14:43.948965 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 26 07:14:43.949211 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 26 07:14:43.949321 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jun 26 07:14:43.949438 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jun 26 07:14:43.949552 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jun 26 07:14:43.949678 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jun 26 07:14:43.950275 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jun 26 07:14:43.950404 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jun 26 07:14:43.950525 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jun 26 07:14:43.950650 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jun 26 07:14:43.951827 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 26 07:14:43.951966 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 26 07:14:43.952092 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 26 07:14:43.952220 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jun 26 07:14:43.952335 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jun 26 07:14:43.952434 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jun 26 07:14:43.952530 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jun 26 07:14:43.952651 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jun 26 07:14:43.953830 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 26 07:14:43.954001 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jun 26 07:14:43.954165 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jun 26 07:14:43.954302 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jun 26 07:14:43.954429 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jun 26 07:14:43.954551 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jun 26 07:14:43.954694 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jun 26 07:14:43.954826 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jun 26 07:14:43.954993 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jun 26 07:14:43.955194 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jun 26 07:14:43.955310 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jun 26 07:14:43.955420 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jun 26 07:14:43.955531 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jun 26 07:14:43.955679 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jun 26 07:14:43.956065 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jun 26 07:14:43.956179 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jun 26 07:14:43.956275 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jun 26 07:14:43.956387 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jun 26 07:14:43.956484 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jun 26 07:14:43.956597 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jun 26 07:14:43.956702 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jun 26 07:14:43.956994 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jun 26 07:14:43.957103 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jun 26 07:14:43.957196 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jun 26 07:14:43.957208 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 26 07:14:43.957218 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 26 07:14:43.957227 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 26 07:14:43.957237 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 26 07:14:43.957246 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 26 07:14:43.957261 kernel: iommu: Default domain type: Translated
Jun 26 07:14:43.957270 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 26 07:14:43.957279 kernel: PCI: Using ACPI for IRQ routing
Jun 26 07:14:43.957288 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 26 07:14:43.957297 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 26 07:14:43.957306 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jun 26 07:14:43.957403 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 26 07:14:43.957501 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 26 07:14:43.957620 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 26 07:14:43.957638 kernel: vgaarb: loaded
Jun 26 07:14:43.957647 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 26 07:14:43.957656 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 26 07:14:43.957665 kernel: clocksource: Switched to clocksource kvm-clock
Jun 26 07:14:43.957674 kernel: VFS: Disk quotas dquot_6.6.0
Jun 26 07:14:43.957683 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 26 07:14:43.957692 kernel: pnp: PnP ACPI init
Jun 26 07:14:43.957701 kernel: pnp: PnP ACPI: found 4 devices
Jun 26 07:14:43.957710 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 26 07:14:43.957722 kernel: NET: Registered PF_INET protocol family
Jun 26 07:14:43.957732 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 26 07:14:43.957754 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jun 26 07:14:43.957763 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 26 07:14:43.957772 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 26 07:14:43.957791 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun 26 07:14:43.957800 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jun 26 07:14:43.957809 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 26 07:14:43.957818 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 26 07:14:43.957831 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 26 07:14:43.957840 kernel: NET: Registered PF_XDP protocol family
Jun 26 07:14:43.957944 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 26 07:14:43.958098 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 26 07:14:43.958206 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 26 07:14:43.958293 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jun 26 07:14:43.958383 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jun 26 07:14:43.958490 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 26 07:14:43.958621 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 26 07:14:43.958636 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 26 07:14:43.958735 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 29907 usecs
Jun 26 07:14:43.958801 kernel: PCI: CLS 0 bytes, default 64
Jun 26 07:14:43.958812 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 26 07:14:43.958821 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jun 26 07:14:43.958831 kernel: Initialise system trusted keyrings
Jun 26 07:14:43.958840 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jun 26 07:14:43.958855 kernel: Key type asymmetric registered
Jun 26 07:14:43.958863 kernel: Asymmetric key parser 'x509' registered
Jun 26 07:14:43.958872 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 26 07:14:43.958881 kernel: io scheduler mq-deadline registered
Jun 26 07:14:43.958890 kernel: io scheduler kyber registered
Jun 26 07:14:43.958899 kernel: io scheduler bfq registered
Jun 26 07:14:43.958908 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 26 07:14:43.958918 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jun 26 07:14:43.958927 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 26 07:14:43.958939 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 26 07:14:43.958948 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 26 07:14:43.958957 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 26 07:14:43.958966 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 26 07:14:43.958984 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 26 07:14:43.958993 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 26 07:14:43.959002 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 26 07:14:43.959168 kernel: rtc_cmos 00:03: RTC can wake from S4
Jun 26 07:14:43.959259 kernel: rtc_cmos 00:03: registered as rtc0
Jun 26 07:14:43.959405 kernel: rtc_cmos 00:03: setting system clock to 2024-06-26T07:14:43 UTC (1719386083)
Jun 26 07:14:43.959494 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jun 26 07:14:43.959505 kernel: intel_pstate: CPU model not supported
Jun 26 07:14:43.959514 kernel: NET: Registered PF_INET6 protocol family
Jun 26 07:14:43.959524 kernel: Segment Routing with IPv6
Jun 26 07:14:43.959533 kernel: In-situ OAM (IOAM) with IPv6
Jun 26 07:14:43.959543 kernel: NET: Registered PF_PACKET protocol family
Jun 26 07:14:43.959557 kernel: Key type dns_resolver registered
Jun 26 07:14:43.959575 kernel: IPI shorthand broadcast: enabled
Jun 26 07:14:43.959609 kernel: sched_clock: Marking stable (965005141, 103389250)->(1103904676, -35510285)
Jun 26 07:14:43.959619 kernel: registered taskstats version 1
Jun 26 07:14:43.959628 kernel: Loading compiled-in X.509 certificates
Jun 26 07:14:43.959637 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 26 07:14:43.959646 kernel: Key type .fscrypt registered
Jun 26 07:14:43.959655 kernel: Key type fscrypt-provisioning registered
Jun 26 07:14:43.959664 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 26 07:14:43.959674 kernel: ima: Allocated hash algorithm: sha1
Jun 26 07:14:43.959687 kernel: ima: No architecture policies found
Jun 26 07:14:43.959696 kernel: clk: Disabling unused clocks
Jun 26 07:14:43.959705 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 26 07:14:43.959714 kernel: Write protecting the kernel read-only data: 36864k
Jun 26 07:14:43.959723 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 26 07:14:43.959781 kernel: Run /init as init process
Jun 26 07:14:43.959793 kernel: with arguments:
Jun 26 07:14:43.959803 kernel: /init
Jun 26 07:14:43.959812 kernel: with environment:
Jun 26 07:14:43.959825 kernel: HOME=/
Jun 26 07:14:43.959834 kernel: TERM=linux
Jun 26 07:14:43.959843 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 26 07:14:43.959857 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 26 07:14:43.959870 systemd[1]: Detected virtualization kvm.
Jun 26 07:14:43.959880 systemd[1]: Detected architecture x86-64.
Jun 26 07:14:43.959890 systemd[1]: Running in initrd.
Jun 26 07:14:43.959899 systemd[1]: No hostname configured, using default hostname.
Jun 26 07:14:43.959912 systemd[1]: Hostname set to .
Jun 26 07:14:43.959922 systemd[1]: Initializing machine ID from VM UUID.
Jun 26 07:14:43.959932 systemd[1]: Queued start job for default target initrd.target.
Jun 26 07:14:43.959942 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 26 07:14:43.959951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 26 07:14:43.959962 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 26 07:14:43.959972 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 26 07:14:43.959981 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 26 07:14:43.959994 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 26 07:14:43.960006 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 26 07:14:43.960016 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 26 07:14:43.960026 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 26 07:14:43.960035 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 26 07:14:43.960045 systemd[1]: Reached target paths.target - Path Units.
Jun 26 07:14:43.960057 systemd[1]: Reached target slices.target - Slice Units.
Jun 26 07:14:43.960067 systemd[1]: Reached target swap.target - Swaps.
Jun 26 07:14:43.960077 systemd[1]: Reached target timers.target - Timer Units.
Jun 26 07:14:43.960090 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 26 07:14:43.960100 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 26 07:14:43.960109 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 26 07:14:43.960122 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 26 07:14:43.960132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 26 07:14:43.960142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 26 07:14:43.960152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 26 07:14:43.960162 systemd[1]: Reached target sockets.target - Socket Units.
Jun 26 07:14:43.960172 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 26 07:14:43.960182 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 26 07:14:43.960192 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 26 07:14:43.960204 systemd[1]: Starting systemd-fsck-usr.service...
Jun 26 07:14:43.960214 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 26 07:14:43.960224 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 26 07:14:43.960234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:14:43.960244 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 26 07:14:43.960253 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 26 07:14:43.960263 systemd[1]: Finished systemd-fsck-usr.service.
Jun 26 07:14:43.960312 systemd-journald[183]: Collecting audit messages is disabled.
Jun 26 07:14:43.960335 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 26 07:14:43.960350 systemd-journald[183]: Journal started
Jun 26 07:14:43.960371 systemd-journald[183]: Runtime Journal (/run/log/journal/e0268ed965b746768461abadbf159ec4) is 4.9M, max 39.3M, 34.4M free.
Jun 26 07:14:43.962786 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 26 07:14:43.938693 systemd-modules-load[184]: Inserted module 'overlay'
Jun 26 07:14:43.980997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 26 07:14:44.009434 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 26 07:14:44.009479 kernel: Bridge firewalling registered
Jun 26 07:14:43.990841 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jun 26 07:14:44.011238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 26 07:14:44.017386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:14:44.019069 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 26 07:14:44.027089 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 26 07:14:44.035115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 26 07:14:44.039304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 26 07:14:44.040305 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 26 07:14:44.052495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 26 07:14:44.060039 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 26 07:14:44.066332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 26 07:14:44.068806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 26 07:14:44.078292 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 26 07:14:44.108544 dracut-cmdline[217]: dracut-dracut-053
Jun 26 07:14:44.114775 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 26 07:14:44.117149 systemd-resolved[212]: Positive Trust Anchors:
Jun 26 07:14:44.117162 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 26 07:14:44.117200 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 26 07:14:44.123423 systemd-resolved[212]: Defaulting to hostname 'linux'.
Jun 26 07:14:44.125382 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 26 07:14:44.125862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 26 07:14:44.218815 kernel: SCSI subsystem initialized
Jun 26 07:14:44.229795 kernel: Loading iSCSI transport class v2.0-870.
Jun 26 07:14:44.243803 kernel: iscsi: registered transport (tcp)
Jun 26 07:14:44.273792 kernel: iscsi: registered transport (qla4xxx)
Jun 26 07:14:44.273885 kernel: QLogic iSCSI HBA Driver
Jun 26 07:14:44.329979 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 26 07:14:44.339049 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 26 07:14:44.370242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 26 07:14:44.370323 kernel: device-mapper: uevent: version 1.0.3
Jun 26 07:14:44.370346 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 26 07:14:44.422849 kernel: raid6: avx2x4 gen() 17062 MB/s
Jun 26 07:14:44.437810 kernel: raid6: avx2x2 gen() 16262 MB/s
Jun 26 07:14:44.455068 kernel: raid6: avx2x1 gen() 13441 MB/s
Jun 26 07:14:44.455190 kernel: raid6: using algorithm avx2x4 gen() 17062 MB/s
Jun 26 07:14:44.473017 kernel: raid6: .... xor() 7116 MB/s, rmw enabled
Jun 26 07:14:44.473097 kernel: raid6: using avx2x2 recovery algorithm
Jun 26 07:14:44.500802 kernel: xor: automatically using best checksumming function avx
Jun 26 07:14:44.701783 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 26 07:14:44.716489 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 26 07:14:44.725078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 26 07:14:44.741702 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Jun 26 07:14:44.748115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 26 07:14:44.756925 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 26 07:14:44.778132 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jun 26 07:14:44.823448 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 26 07:14:44.831010 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 26 07:14:44.892979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 26 07:14:44.899239 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 26 07:14:44.920981 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 26 07:14:44.931145 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 26 07:14:44.932939 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 26 07:14:44.934267 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 26 07:14:44.940945 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 26 07:14:44.957407 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 26 07:14:44.977880 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jun 26 07:14:45.075293 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jun 26 07:14:45.075451 kernel: scsi host0: Virtio SCSI HBA
Jun 26 07:14:45.075576 kernel: cryptd: max_cpu_qlen set to 1000
Jun 26 07:14:45.075591 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 26 07:14:45.075610 kernel: GPT:9289727 != 125829119
Jun 26 07:14:45.075625 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 26 07:14:45.075643 kernel: GPT:9289727 != 125829119
Jun 26 07:14:45.075659 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 26 07:14:45.075682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:14:45.075698 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jun 26 07:14:45.082104 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Jun 26 07:14:45.082345 kernel: ACPI: bus type USB registered
Jun 26 07:14:45.082362 kernel: usbcore: registered new interface driver usbfs
Jun 26 07:14:45.046451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 26 07:14:45.046599 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 26 07:14:45.047264 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 26 07:14:45.047885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:14:45.090862 kernel: usbcore: registered new interface driver hub
Jun 26 07:14:45.090897 kernel: usbcore: registered new device driver usb
Jun 26 07:14:45.090915 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 26 07:14:45.090929 kernel: AES CTR mode by8 optimization enabled
Jun 26 07:14:45.048144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:14:45.049875 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:14:45.057182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:14:45.114774 kernel: libata version 3.00 loaded.
Jun 26 07:14:45.130167 kernel: ata_piix 0000:00:01.1: version 2.13
Jun 26 07:14:45.152829 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (448)
Jun 26 07:14:45.152853 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (457)
Jun 26 07:14:45.152866 kernel: scsi host1: ata_piix
Jun 26 07:14:45.153039 kernel: scsi host2: ata_piix
Jun 26 07:14:45.153154 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jun 26 07:14:45.153168 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jun 26 07:14:45.137776 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 26 07:14:45.169370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:14:45.181948 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 26 07:14:45.186118 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 26 07:14:45.187500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 26 07:14:45.192803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 26 07:14:45.199036 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 26 07:14:45.201963 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 26 07:14:45.210133 disk-uuid[535]: Primary Header is updated.
Jun 26 07:14:45.210133 disk-uuid[535]: Secondary Entries is updated.
Jun 26 07:14:45.210133 disk-uuid[535]: Secondary Header is updated.
Jun 26 07:14:45.220779 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:14:45.227773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:14:45.235420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 26 07:14:45.238874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:14:45.345787 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jun 26 07:14:45.354781 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jun 26 07:14:45.354988 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jun 26 07:14:45.355136 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jun 26 07:14:45.355330 kernel: hub 1-0:1.0: USB hub found
Jun 26 07:14:45.355514 kernel: hub 1-0:1.0: 2 ports detected
Jun 26 07:14:46.245913 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 26 07:14:46.247294 disk-uuid[536]: The operation has completed successfully.
Jun 26 07:14:46.297187 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 26 07:14:46.297305 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 26 07:14:46.311101 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 26 07:14:46.326869 sh[564]: Success
Jun 26 07:14:46.343797 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 26 07:14:46.410345 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 26 07:14:46.412666 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 26 07:14:46.413986 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 26 07:14:46.435841 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0
Jun 26 07:14:46.435918 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 26 07:14:46.436883 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 26 07:14:46.437907 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 26 07:14:46.438784 kernel: BTRFS info (device dm-0): using free space tree
Jun 26 07:14:46.448543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 26 07:14:46.449968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 26 07:14:46.457070 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 26 07:14:46.459990 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 26 07:14:46.476333 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:14:46.476404 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 26 07:14:46.476439 kernel: BTRFS info (device vda6): using free space tree
Jun 26 07:14:46.480774 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 26 07:14:46.497092 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jun 26 07:14:46.498240 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:14:46.511051 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 26 07:14:46.517037 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 26 07:14:46.663568 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 26 07:14:46.670650 ignition[659]: Ignition 2.19.0
Jun 26 07:14:46.670667 ignition[659]: Stage: fetch-offline
Jun 26 07:14:46.670731 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Jun 26 07:14:46.670764 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:14:46.670926 ignition[659]: parsed url from cmdline: ""
Jun 26 07:14:46.670932 ignition[659]: no config URL provided
Jun 26 07:14:46.670941 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Jun 26 07:14:46.670958 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Jun 26 07:14:46.670970 ignition[659]: failed to fetch config: resource requires networking
Jun 26 07:14:46.671292 ignition[659]: Ignition finished successfully
Jun 26 07:14:46.676002 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 26 07:14:46.676586 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 26 07:14:46.713666 systemd-networkd[755]: lo: Link UP
Jun 26 07:14:46.713688 systemd-networkd[755]: lo: Gained carrier
Jun 26 07:14:46.717318 systemd-networkd[755]: Enumeration completed
Jun 26 07:14:46.717919 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jun 26 07:14:46.717924 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jun 26 07:14:46.718183 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 26 07:14:46.719281 systemd[1]: Reached target network.target - Network.
Jun 26 07:14:46.719347 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 26 07:14:46.719353 systemd-networkd[755]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 26 07:14:46.720885 systemd-networkd[755]: eth0: Link UP
Jun 26 07:14:46.720891 systemd-networkd[755]: eth0: Gained carrier
Jun 26 07:14:46.720905 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jun 26 07:14:46.726880 systemd-networkd[755]: eth1: Link UP
Jun 26 07:14:46.726886 systemd-networkd[755]: eth1: Gained carrier
Jun 26 07:14:46.726904 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 26 07:14:46.729805 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 26 07:14:46.737855 systemd-networkd[755]: eth0: DHCPv4 address 165.232.133.181/20, gateway 165.232.128.1 acquired from 169.254.169.253
Jun 26 07:14:46.741838 systemd-networkd[755]: eth1: DHCPv4 address 10.124.0.11/20 acquired from 169.254.169.253
Jun 26 07:14:46.752333 ignition[758]: Ignition 2.19.0
Jun 26 07:14:46.752345 ignition[758]: Stage: fetch
Jun 26 07:14:46.752586 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jun 26 07:14:46.752597 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:14:46.752728 ignition[758]: parsed url from cmdline: ""
Jun 26 07:14:46.752732 ignition[758]: no config URL provided
Jun 26 07:14:46.752738 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jun 26 07:14:46.752791 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jun 26 07:14:46.752811 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jun 26 07:14:46.775930 ignition[758]: GET result: OK
Jun 26 07:14:46.776101 ignition[758]: parsing config with SHA512: 981532d8bf2f81f8e89a8ba5b0a9a9243fa81a4426dde575d17028884a5e65e7589b9d976024b8c8b01aecacf19e9f67b4e5acafb7e587447378762f3ebf2fd0
Jun 26 07:14:46.782943 unknown[758]: fetched base config from "system"
Jun 26 07:14:46.782961 unknown[758]: fetched base config from "system"
Jun 26 07:14:46.783476 ignition[758]: fetch: fetch complete
Jun 26 07:14:46.782971 unknown[758]: fetched user config from "digitalocean"
Jun 26 07:14:46.783482 ignition[758]: fetch: fetch passed
Jun 26 07:14:46.786311 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 26 07:14:46.783553 ignition[758]: Ignition finished successfully
Jun 26 07:14:46.798393 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 26 07:14:46.823958 ignition[766]: Ignition 2.19.0
Jun 26 07:14:46.823974 ignition[766]: Stage: kargs
Jun 26 07:14:46.824244 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jun 26 07:14:46.824261 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:14:46.826788 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 26 07:14:46.825542 ignition[766]: kargs: kargs passed
Jun 26 07:14:46.825621 ignition[766]: Ignition finished successfully
Jun 26 07:14:46.842185 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 26 07:14:46.860498 ignition[773]: Ignition 2.19.0
Jun 26 07:14:46.860517 ignition[773]: Stage: disks
Jun 26 07:14:46.860815 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jun 26 07:14:46.860830 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:14:46.863659 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 26 07:14:46.862196 ignition[773]: disks: disks passed
Jun 26 07:14:46.862267 ignition[773]: Ignition finished successfully
Jun 26 07:14:46.866432 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 26 07:14:46.871778 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 26 07:14:46.872441 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 26 07:14:46.873434 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 26 07:14:46.874525 systemd[1]: Reached target basic.target - Basic System.
Jun 26 07:14:46.889109 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 26 07:14:46.908767 systemd-fsck[782]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jun 26 07:14:46.913898 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 26 07:14:46.921913 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 26 07:14:47.063813 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none.
Jun 26 07:14:47.064686 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 26 07:14:47.065853 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 26 07:14:47.072924 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 26 07:14:47.076907 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 26 07:14:47.080005 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jun 26 07:14:47.085776 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (790)
Jun 26 07:14:47.088952 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:14:47.089023 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 26 07:14:47.089208 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 26 07:14:47.093665 kernel: BTRFS info (device vda6): using free space tree
Jun 26 07:14:47.093900 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 26 07:14:47.093956 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 26 07:14:47.095723 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 26 07:14:47.102516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 26 07:14:47.108770 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 26 07:14:47.117311 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 26 07:14:47.190769 coreos-metadata[792]: Jun 26 07:14:47.190 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:14:47.191716 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jun 26 07:14:47.195875 coreos-metadata[793]: Jun 26 07:14:47.195 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:14:47.199282 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jun 26 07:14:47.200790 coreos-metadata[792]: Jun 26 07:14:47.200 INFO Fetch successful
Jun 26 07:14:47.208052 coreos-metadata[793]: Jun 26 07:14:47.207 INFO Fetch successful
Jun 26 07:14:47.211235 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jun 26 07:14:47.212818 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jun 26 07:14:47.212936 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jun 26 07:14:47.215198 coreos-metadata[793]: Jun 26 07:14:47.213 INFO wrote hostname ci-4012.0.0-0-ebda1d1a0c to /sysroot/etc/hostname
Jun 26 07:14:47.217036 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 26 07:14:47.220764 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 26 07:14:47.338852 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 26 07:14:47.346938 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 26 07:14:47.351984 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 26 07:14:47.360776 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:14:47.393963 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 26 07:14:47.403191 ignition[911]: INFO : Ignition 2.19.0
Jun 26 07:14:47.403191 ignition[911]: INFO : Stage: mount
Jun 26 07:14:47.404576 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 26 07:14:47.404576 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:14:47.404576 ignition[911]: INFO : mount: mount passed
Jun 26 07:14:47.404576 ignition[911]: INFO : Ignition finished successfully
Jun 26 07:14:47.405704 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 26 07:14:47.411974 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 26 07:14:47.435321 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 26 07:14:47.440012 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 26 07:14:47.462817 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Jun 26 07:14:47.466352 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 26 07:14:47.466430 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 26 07:14:47.466445 kernel: BTRFS info (device vda6): using free space tree
Jun 26 07:14:47.470802 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 26 07:14:47.473027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 26 07:14:47.501161 ignition[941]: INFO : Ignition 2.19.0
Jun 26 07:14:47.501161 ignition[941]: INFO : Stage: files
Jun 26 07:14:47.502413 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 26 07:14:47.502413 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:14:47.503595 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Jun 26 07:14:47.504265 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 26 07:14:47.504265 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 26 07:14:47.507749 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 26 07:14:47.508707 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 26 07:14:47.509649 unknown[941]: wrote ssh authorized keys file for user: core
Jun 26 07:14:47.510675 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 26 07:14:47.513600 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 26 07:14:47.515395 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jun 26 07:14:47.543805 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 26 07:14:47.644994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 26 07:14:47.644994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jun 26 07:14:47.647096 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jun 26 07:14:48.017035 systemd-networkd[755]: eth0: Gained IPv6LL
Jun 26 07:14:48.183000 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 26 07:14:48.442184 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jun 26 07:14:48.442184 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 26 07:14:48.443736 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 26 07:14:48.443736 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 26 07:14:48.443736 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 26 07:14:48.443736 ignition[941]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jun 26 07:14:48.443736 ignition[941]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jun 26 07:14:48.448627 ignition[941]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 26 07:14:48.448627 ignition[941]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 26 07:14:48.448627 ignition[941]: INFO : files: files passed
Jun 26 07:14:48.448627 ignition[941]: INFO : Ignition finished successfully
Jun 26 07:14:48.445490 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 26 07:14:48.455063 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 26 07:14:48.459292 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 26 07:14:48.463210 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 26 07:14:48.463983 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 26 07:14:48.466230 systemd-networkd[755]: eth1: Gained IPv6LL
Jun 26 07:14:48.487777 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 26 07:14:48.487777 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 26 07:14:48.491474 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 26 07:14:48.492631 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 26 07:14:48.493763 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 26 07:14:48.498986 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 26 07:14:48.565022 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 26 07:14:48.565158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 26 07:14:48.566651 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 26 07:14:48.567313 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 26 07:14:48.568423 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 26 07:14:48.574223 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 26 07:14:48.600427 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 26 07:14:48.607055 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 26 07:14:48.620905 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 26 07:14:48.622088 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 26 07:14:48.622666 systemd[1]: Stopped target timers.target - Timer Units.
Jun 26 07:14:48.623128 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 26 07:14:48.623267 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 26 07:14:48.624349 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 26 07:14:48.624929 systemd[1]: Stopped target basic.target - Basic System.
Jun 26 07:14:48.625614 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 26 07:14:48.626629 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 26 07:14:48.627330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 26 07:14:48.628336 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 26 07:14:48.629287 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 26 07:14:48.630410 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 26 07:14:48.631210 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 26 07:14:48.632052 systemd[1]: Stopped target swap.target - Swaps.
Jun 26 07:14:48.632804 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 26 07:14:48.632946 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 26 07:14:48.633942 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 26 07:14:48.634646 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 26 07:14:48.635243 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 26 07:14:48.635344 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 26 07:14:48.636028 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 26 07:14:48.636169 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 26 07:14:48.637151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 26 07:14:48.637315 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 26 07:14:48.638317 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 26 07:14:48.638462 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 26 07:14:48.639061 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 26 07:14:48.639190 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 26 07:14:48.649631 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 26 07:14:48.652025 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 26 07:14:48.652873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 26 07:14:48.653882 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 26 07:14:48.654559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 26 07:14:48.654907 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 26 07:14:48.662178 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 26 07:14:48.662812 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 26 07:14:48.673671 ignition[994]: INFO : Ignition 2.19.0
Jun 26 07:14:48.674997 ignition[994]: INFO : Stage: umount
Jun 26 07:14:48.676879 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 26 07:14:48.676879 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jun 26 07:14:48.676879 ignition[994]: INFO : umount: umount passed
Jun 26 07:14:48.676879 ignition[994]: INFO : Ignition finished successfully
Jun 26 07:14:48.679703 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 26 07:14:48.679908 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 26 07:14:48.680726 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 26 07:14:48.680898 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 26 07:14:48.682354 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 26 07:14:48.682441 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 26 07:14:48.683043 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 26 07:14:48.683110 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 26 07:14:48.683542 systemd[1]: Stopped target network.target - Network.
Jun 26 07:14:48.687857 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 26 07:14:48.687945 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 26 07:14:48.703579 systemd[1]: Stopped target paths.target - Path Units.
Jun 26 07:14:48.703980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 26 07:14:48.704031 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 26 07:14:48.704546 systemd[1]: Stopped target slices.target - Slice Units.
Jun 26 07:14:48.705302 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 26 07:14:48.706234 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 26 07:14:48.706300 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 26 07:14:48.706781 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 26 07:14:48.706826 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 26 07:14:48.707446 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 26 07:14:48.707515 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 26 07:14:48.708283 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 26 07:14:48.708334 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 26 07:14:48.709458 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 26 07:14:48.710044 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 26 07:14:48.724584 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 26 07:14:48.724839 systemd-networkd[755]: eth0: DHCPv6 lease lost
Jun 26 07:14:48.729233 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 26 07:14:48.729359 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 26 07:14:48.736592 systemd-networkd[755]: eth1: DHCPv6 lease lost
Jun 26 07:14:48.740426 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 26 07:14:48.740561 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 26 07:14:48.742630 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 26 07:14:48.742707 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 26 07:14:48.762284 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 26 07:14:48.766108 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 26 07:14:48.766333 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 26 07:14:48.767696 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 26 07:14:48.767785 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 26 07:14:48.768187 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 26 07:14:48.768235 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 26 07:14:48.770464 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 26 07:14:48.770551 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 26 07:14:48.771696 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 26 07:14:48.775383 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 26 07:14:48.775551 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 26 07:14:48.786354 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 26 07:14:48.786468 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 26 07:14:48.787701 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 26 07:14:48.788036 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 26 07:14:48.789588 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 26 07:14:48.789728 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 26 07:14:48.790712 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 26 07:14:48.790844 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 26 07:14:48.793101 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 26 07:14:48.793184 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 26 07:14:48.795385 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 26 07:14:48.795482 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 26 07:14:48.796799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 26 07:14:48.796911 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 26 07:14:48.801057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 26 07:14:48.803213 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 26 07:14:48.803339 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 26 07:14:48.804438 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 26 07:14:48.804522 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 26 07:14:48.809028 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 26 07:14:48.809122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 26 07:14:48.809826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:14:48.809895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:14:48.814632 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 26 07:14:48.814836 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 26 07:14:48.826148 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 26 07:14:48.826342 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 26 07:14:48.827739 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 26 07:14:48.834114 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 26 07:14:48.851670 systemd[1]: Switching root.
Jun 26 07:14:48.910904 systemd-journald[183]: Journal stopped
Jun 26 07:14:50.058410 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jun 26 07:14:50.058485 kernel: SELinux: policy capability network_peer_controls=1
Jun 26 07:14:50.058512 kernel: SELinux: policy capability open_perms=1
Jun 26 07:14:50.058530 kernel: SELinux: policy capability extended_socket_class=1
Jun 26 07:14:50.058542 kernel: SELinux: policy capability always_check_network=0
Jun 26 07:14:50.058559 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 26 07:14:50.058571 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 26 07:14:50.058584 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 26 07:14:50.058600 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 26 07:14:50.058612 kernel: audit: type=1403 audit(1719386089.058:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 26 07:14:50.058626 systemd[1]: Successfully loaded SELinux policy in 44.399ms.
Jun 26 07:14:50.058651 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.976ms.
Jun 26 07:14:50.058665 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 26 07:14:50.058680 systemd[1]: Detected virtualization kvm.
Jun 26 07:14:50.058694 systemd[1]: Detected architecture x86-64.
Jun 26 07:14:50.058707 systemd[1]: Detected first boot.
Jun 26 07:14:50.058720 systemd[1]: Hostname set to .
Jun 26 07:14:50.058733 systemd[1]: Initializing machine ID from VM UUID.
Jun 26 07:14:50.058765 zram_generator::config[1038]: No configuration found.
Jun 26 07:14:50.058780 systemd[1]: Populated /etc with preset unit settings.
Jun 26 07:14:50.058792 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 26 07:14:50.058813 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 26 07:14:50.058826 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 26 07:14:50.058840 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 26 07:14:50.058853 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 26 07:14:50.058874 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 26 07:14:50.058888 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 26 07:14:50.058921 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 26 07:14:50.058935 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 26 07:14:50.058947 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 26 07:14:50.058963 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 26 07:14:50.058976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 26 07:14:50.058990 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 26 07:14:50.059007 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 26 07:14:50.059028 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 26 07:14:50.059047 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 26 07:14:50.059066 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 26 07:14:50.059079 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 26 07:14:50.059095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 26 07:14:50.059108 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 26 07:14:50.059121 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 26 07:14:50.059134 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 26 07:14:50.059146 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 26 07:14:50.059160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 26 07:14:50.059173 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 26 07:14:50.059207 systemd[1]: Reached target slices.target - Slice Units.
Jun 26 07:14:50.059234 systemd[1]: Reached target swap.target - Swaps.
Jun 26 07:14:50.059247 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 26 07:14:50.059261 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 26 07:14:50.059274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 26 07:14:50.059287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 26 07:14:50.059300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 26 07:14:50.059317 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 26 07:14:50.059330 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 26 07:14:50.059346 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 26 07:14:50.059359 systemd[1]: Mounting media.mount - External Media Directory...
Jun 26 07:14:50.059372 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:50.059386 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 26 07:14:50.059399 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 26 07:14:50.059412 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 26 07:14:50.059426 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 26 07:14:50.059439 systemd[1]: Reached target machines.target - Containers.
Jun 26 07:14:50.059455 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 26 07:14:50.059468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:14:50.059480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 26 07:14:50.059493 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 26 07:14:50.059505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 26 07:14:50.059528 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 26 07:14:50.059541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 26 07:14:50.059554 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 26 07:14:50.059567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 26 07:14:50.059583 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 26 07:14:50.059596 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 26 07:14:50.059624 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 26 07:14:50.059638 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 26 07:14:50.059651 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 26 07:14:50.059663 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 26 07:14:50.059675 kernel: loop: module loaded
Jun 26 07:14:50.059688 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 26 07:14:50.059702 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 26 07:14:50.059720 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 26 07:14:50.059733 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 26 07:14:50.059757 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 26 07:14:50.059771 systemd[1]: Stopped verity-setup.service.
Jun 26 07:14:50.059785 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:50.059839 systemd-journald[1106]: Collecting audit messages is disabled.
Jun 26 07:14:50.059873 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 26 07:14:50.059895 systemd-journald[1106]: Journal started
Jun 26 07:14:50.059939 systemd-journald[1106]: Runtime Journal (/run/log/journal/e0268ed965b746768461abadbf159ec4) is 4.9M, max 39.3M, 34.4M free.
Jun 26 07:14:49.751836 systemd[1]: Queued start job for default target multi-user.target.
Jun 26 07:14:49.779147 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 26 07:14:49.779718 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 26 07:14:50.070786 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 26 07:14:50.072734 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 26 07:14:50.073527 systemd[1]: Mounted media.mount - External Media Directory.
Jun 26 07:14:50.075005 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 26 07:14:50.081045 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 26 07:14:50.081773 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 26 07:14:50.083138 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 26 07:14:50.085011 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 26 07:14:50.085825 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 26 07:14:50.087409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 26 07:14:50.087616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 26 07:14:50.092762 kernel: fuse: init (API version 7.39)
Jun 26 07:14:50.094634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 26 07:14:50.100942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 26 07:14:50.102086 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 26 07:14:50.102245 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 26 07:14:50.103124 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 26 07:14:50.103264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 26 07:14:50.104081 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 26 07:14:50.104774 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 26 07:14:50.162817 kernel: ACPI: bus type drm_connector registered
Jun 26 07:14:50.162968 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 26 07:14:50.178883 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 26 07:14:50.179496 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 26 07:14:50.179562 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 26 07:14:50.183184 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 26 07:14:50.190941 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 26 07:14:50.193558 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 26 07:14:50.194239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:14:50.203941 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 26 07:14:50.206689 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 26 07:14:50.208061 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 26 07:14:50.209734 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 26 07:14:50.210534 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 26 07:14:50.218014 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 26 07:14:50.221945 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 26 07:14:50.230822 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 26 07:14:50.236459 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 26 07:14:50.238323 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 26 07:14:50.238564 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 26 07:14:50.241209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 26 07:14:50.243201 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 26 07:14:50.243770 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 26 07:14:50.247102 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 26 07:14:50.273602 systemd-journald[1106]: Time spent on flushing to /var/log/journal/e0268ed965b746768461abadbf159ec4 is 118.046ms for 987 entries.
Jun 26 07:14:50.273602 systemd-journald[1106]: System Journal (/var/log/journal/e0268ed965b746768461abadbf159ec4) is 8.0M, max 195.6M, 187.6M free.
Jun 26 07:14:50.416680 systemd-journald[1106]: Received client request to flush runtime journal.
Jun 26 07:14:50.416955 kernel: loop0: detected capacity change from 0 to 139760
Jun 26 07:14:50.416979 kernel: block loop0: the capability attribute has been deprecated.
Jun 26 07:14:50.276676 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 26 07:14:50.323458 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 26 07:14:50.337254 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 26 07:14:50.380233 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 26 07:14:50.384724 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 26 07:14:50.398132 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 26 07:14:50.419058 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jun 26 07:14:50.431247 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 26 07:14:50.430377 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 26 07:14:50.440468 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 26 07:14:50.457849 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 26 07:14:50.459119 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 26 07:14:50.469816 kernel: loop1: detected capacity change from 0 to 211296
Jun 26 07:14:50.479134 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Jun 26 07:14:50.479156 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Jun 26 07:14:50.506483 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 26 07:14:50.516205 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 26 07:14:50.534853 kernel: loop2: detected capacity change from 0 to 8
Jun 26 07:14:50.578596 kernel: loop3: detected capacity change from 0 to 80568
Jun 26 07:14:50.601672 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 26 07:14:50.613597 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 26 07:14:50.646775 kernel: loop4: detected capacity change from 0 to 139760
Jun 26 07:14:50.682794 kernel: loop5: detected capacity change from 0 to 211296
Jun 26 07:14:50.709932 kernel: loop6: detected capacity change from 0 to 8
Jun 26 07:14:50.726261 kernel: loop7: detected capacity change from 0 to 80568
Jun 26 07:14:50.715573 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Jun 26 07:14:50.715604 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Jun 26 07:14:50.736483 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jun 26 07:14:50.737329 (sd-merge)[1183]: Merged extensions into '/usr'.
Jun 26 07:14:50.745873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 26 07:14:50.759309 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 26 07:14:50.761820 systemd[1]: Reloading...
Jun 26 07:14:51.031793 zram_generator::config[1209]: No configuration found.
Jun 26 07:14:51.390955 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 26 07:14:51.404535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 26 07:14:51.500639 systemd[1]: Reloading finished in 733 ms.
Jun 26 07:14:51.529069 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 26 07:14:51.536195 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 26 07:14:51.554432 systemd[1]: Starting ensure-sysext.service...
Jun 26 07:14:51.573220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 26 07:14:51.602124 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Jun 26 07:14:51.602165 systemd[1]: Reloading...
Jun 26 07:14:51.640569 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 26 07:14:51.641262 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 26 07:14:51.645719 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 26 07:14:51.646293 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jun 26 07:14:51.646395 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jun 26 07:14:51.654510 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Jun 26 07:14:51.655238 systemd-tmpfiles[1253]: Skipping /boot
Jun 26 07:14:51.674639 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Jun 26 07:14:51.676096 systemd-tmpfiles[1253]: Skipping /boot
Jun 26 07:14:51.793805 zram_generator::config[1275]: No configuration found.
Jun 26 07:14:52.060314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 26 07:14:52.156493 systemd[1]: Reloading finished in 553 ms.
Jun 26 07:14:52.179308 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 26 07:14:52.186938 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 26 07:14:52.203219 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 26 07:14:52.216123 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 26 07:14:52.221031 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 26 07:14:52.231219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 26 07:14:52.237064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 26 07:14:52.246324 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 26 07:14:52.257457 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:52.257841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:14:52.271481 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 26 07:14:52.276305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 26 07:14:52.285425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 26 07:14:52.286561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:14:52.290088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:52.295458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:52.295747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:14:52.297202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:14:52.297308 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:52.312088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:52.312377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:14:52.323782 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 26 07:14:52.325815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:14:52.328136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:52.344281 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 26 07:14:52.348056 systemd[1]: Finished ensure-sysext.service.
Jun 26 07:14:52.365284 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 26 07:14:52.377881 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 26 07:14:52.403635 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Jun 26 07:14:52.413629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 26 07:14:52.414946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 26 07:14:52.422588 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 26 07:14:52.429519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 26 07:14:52.429869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 26 07:14:52.431596 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 26 07:14:52.431972 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 26 07:14:52.440295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 26 07:14:52.440430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 26 07:14:52.442307 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 26 07:14:52.442870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 26 07:14:52.446521 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 26 07:14:52.476982 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 26 07:14:52.491377 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 26 07:14:52.492255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 26 07:14:52.504131 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 26 07:14:52.524125 augenrules[1364]: No rules
Jun 26 07:14:52.525849 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 26 07:14:52.538844 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 26 07:14:52.558000 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 26 07:14:52.744855 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373)
Jun 26 07:14:52.754790 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1380)
Jun 26 07:14:52.762723 systemd-networkd[1360]: lo: Link UP
Jun 26 07:14:52.765774 systemd-networkd[1360]: lo: Gained carrier
Jun 26 07:14:52.785821 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jun 26 07:14:52.786882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:52.787119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:14:52.797110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 26 07:14:52.809173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 26 07:14:52.819189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 26 07:14:52.820278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:14:52.820360 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 26 07:14:52.820386 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:14:52.823435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 26 07:14:52.823726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 26 07:14:52.837486 systemd-networkd[1360]: Enumeration completed
Jun 26 07:14:52.840581 systemd-networkd[1360]: eth1: Configuring with /run/systemd/network/10-be:e7:4a:d1:0f:5c.network.
Jun 26 07:14:52.841211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 26 07:14:52.842916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 26 07:14:52.844484 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 26 07:14:52.847678 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 26 07:14:52.850326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 26 07:14:52.853719 systemd-networkd[1360]: eth1: Link UP
Jun 26 07:14:52.853727 systemd-networkd[1360]: eth1: Gained carrier
Jun 26 07:14:52.859044 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 26 07:14:52.868819 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Jun 26 07:14:52.875685 systemd[1]: Reached target time-set.target - System Time Set.
Jun 26 07:14:52.887797 kernel: ISO 9660 Extensions: RRIP_1991A
Jun 26 07:14:52.885613 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 26 07:14:52.887201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 26 07:14:52.887274 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 26 07:14:52.890407 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jun 26 07:14:52.914552 systemd-resolved[1327]: Positive Trust Anchors:
Jun 26 07:14:52.916352 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 26 07:14:52.917425 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 26 07:14:52.934815 systemd-resolved[1327]: Using system hostname 'ci-4012.0.0-0-ebda1d1a0c'.
Jun 26 07:14:52.945712 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 26 07:14:52.946683 systemd[1]: Reached target network.target - Network.
Jun 26 07:14:52.947335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 26 07:14:52.965076 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 26 07:14:52.968821 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jun 26 07:14:52.993855 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jun 26 07:14:53.018572 kernel: ACPI: button: Power Button [PWRF]
Jun 26 07:14:53.050581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 26 07:14:53.065234 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 26 07:14:53.095794 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jun 26 07:14:53.114705 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 26 07:14:53.146268 systemd-networkd[1360]: eth0: Configuring with /run/systemd/network/10-96:7e:80:d5:e2:f8.network.
Jun 26 07:14:53.146964 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Jun 26 07:14:53.148957 systemd-networkd[1360]: eth0: Link UP
Jun 26 07:14:53.148968 systemd-networkd[1360]: eth0: Gained carrier
Jun 26 07:14:53.153964 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Jun 26 07:14:53.154900 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Jun 26 07:14:53.197788 kernel: mousedev: PS/2 mouse device common for all mice
Jun 26 07:14:53.223664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:14:53.247783 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jun 26 07:14:53.249776 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jun 26 07:14:53.254817 kernel: Console: switching to colour dummy device 80x25
Jun 26 07:14:53.256782 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jun 26 07:14:53.256907 kernel: [drm] features: -context_init
Jun 26 07:14:53.256933 kernel: [drm] number of scanouts: 1
Jun 26 07:14:53.257771 kernel: [drm] number of cap sets: 0
Jun 26 07:14:53.262796 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jun 26 07:14:53.336410 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jun 26 07:14:53.336622 kernel: Console: switching to colour frame buffer device 128x48
Jun 26 07:14:53.336661 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jun 26 07:14:53.344990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:14:53.345347 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:14:53.370096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:14:53.390241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:14:53.390594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:14:53.410262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:14:53.427810 kernel: EDAC MC: Ver: 3.0.0
Jun 26 07:14:53.468423 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 26 07:14:53.483160 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 26 07:14:53.527004 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 26 07:14:53.558371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:14:53.565317 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 26 07:14:53.568905 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 26 07:14:53.570647 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 26 07:14:53.571252 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 26 07:14:53.571619 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 26 07:14:53.573184 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 26 07:14:53.574827 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 26 07:14:53.575024 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 26 07:14:53.575147 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 26 07:14:53.575188 systemd[1]: Reached target paths.target - Path Units.
Jun 26 07:14:53.575280 systemd[1]: Reached target timers.target - Timer Units.
Jun 26 07:14:53.577924 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 26 07:14:53.581571 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 26 07:14:53.596590 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 26 07:14:53.605209 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 26 07:14:53.608736 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 26 07:14:53.611090 systemd[1]: Reached target sockets.target - Socket Units.
Jun 26 07:14:53.612526 systemd[1]: Reached target basic.target - Basic System.
Jun 26 07:14:53.613216 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 26 07:14:53.613258 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 26 07:14:53.620046 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 26 07:14:53.629410 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 26 07:14:53.633078 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 26 07:14:53.653201 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 26 07:14:53.665996 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 26 07:14:53.671003 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 26 07:14:53.671720 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 26 07:14:53.682122 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 26 07:14:53.687712 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 26 07:14:53.694238 jq[1440]: false
Jun 26 07:14:53.698109 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 26 07:14:53.705322 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 26 07:14:53.721386 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 26 07:14:53.737323 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 26 07:14:53.738390 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 26 07:14:53.748346 systemd[1]: Starting update-engine.service - Update Engine...
Jun 26 07:14:53.760102 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 26 07:14:53.767859 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 26 07:14:53.780502 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 26 07:14:53.781024 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 26 07:14:53.807440 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 26 07:14:53.807854 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 26 07:14:53.851561 jq[1452]: true
Jun 26 07:14:53.861486 dbus-daemon[1439]: [system] SELinux support is enabled
Jun 26 07:14:53.871327 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 26 07:14:53.877367 coreos-metadata[1438]: Jun 26 07:14:53.876 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:14:53.881636 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 26 07:14:53.881705 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 26 07:14:53.883807 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 26 07:14:53.885934 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jun 26 07:14:53.886022 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found loop4
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found loop5
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found loop6
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found loop7
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found vda
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found vda1
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found vda2
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found vda3
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found usr
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found vda4
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found vda6
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found vda7
Jun 26 07:14:53.908810 extend-filesystems[1443]: Found vda9
Jun 26 07:14:53.908810 extend-filesystems[1443]: Checking size of /dev/vda9
Jun 26 07:14:53.952462 coreos-metadata[1438]: Jun 26 07:14:53.891 INFO Fetch successful
Jun 26 07:14:53.944467 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 26 07:14:53.961023 systemd[1]: motdgen.service: Deactivated successfully.
Jun 26 07:14:53.961396 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 26 07:14:53.972442 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 26 07:14:53.977596 tar[1454]: linux-amd64/helm
Jun 26 07:14:54.000253 update_engine[1450]: I0626 07:14:54.000084 1450 main.cc:92] Flatcar Update Engine starting
Jun 26 07:14:54.013774 jq[1467]: true
Jun 26 07:14:54.014266 update_engine[1450]: I0626 07:14:54.013394 1450 update_check_scheduler.cc:74] Next update check in 9m9s
Jun 26 07:14:54.018226 systemd[1]: Started update-engine.service - Update Engine.
Jun 26 07:14:54.040248 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 26 07:14:54.058262 extend-filesystems[1443]: Resized partition /dev/vda9
Jun 26 07:14:54.079286 extend-filesystems[1484]: resize2fs 1.47.0 (5-Feb-2023)
Jun 26 07:14:54.092574 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1386)
Jun 26 07:14:54.092739 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jun 26 07:14:54.139158 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 26 07:14:54.144403 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 26 07:14:54.346834 systemd-logind[1448]: New seat seat0.
Jun 26 07:14:54.365626 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 26 07:14:54.367594 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 26 07:14:54.369556 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 26 07:14:54.427834 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jun 26 07:14:54.426860 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 26 07:14:54.510128 bash[1506]: Updated "/home/core/.ssh/authorized_keys"
Jun 26 07:14:54.445912 systemd[1]: Starting sshkeys.service...
Jun 26 07:14:54.510460 extend-filesystems[1484]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 26 07:14:54.510460 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 8
Jun 26 07:14:54.510460 extend-filesystems[1484]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jun 26 07:14:54.528501 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Jun 26 07:14:54.528501 extend-filesystems[1443]: Found vdb
Jun 26 07:14:54.513556 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 26 07:14:54.514061 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 26 07:14:54.564001 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 26 07:14:54.573294 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 26 07:14:54.579000 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 26 07:14:54.655152 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 26 07:14:54.689676 coreos-metadata[1516]: Jun 26 07:14:54.689 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:14:54.702317 coreos-metadata[1516]: Jun 26 07:14:54.702 INFO Fetch successful
Jun 26 07:14:54.721818 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 26 07:14:54.736251 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 26 07:14:54.739716 unknown[1516]: wrote ssh authorized keys file for user: core
Jun 26 07:14:54.739865 systemd-networkd[1360]: eth0: Gained IPv6LL
Jun 26 07:14:54.740538 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Jun 26 07:14:54.749029 systemd[1]: Started sshd@0-165.232.133.181:22-147.75.109.163:47754.service - OpenSSH per-connection server daemon (147.75.109.163:47754).
Jun 26 07:14:54.765328 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 26 07:14:54.795992 systemd[1]: Reached target network-online.target - Network is Online.
Jun 26 07:14:54.804206 systemd-networkd[1360]: eth1: Gained IPv6LL
Jun 26 07:14:54.804855 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Jun 26 07:14:54.812563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:14:54.824496 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 26 07:14:54.828173 systemd[1]: issuegen.service: Deactivated successfully.
Jun 26 07:14:54.828553 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 26 07:14:54.856839 containerd[1464]: time="2024-06-26T07:14:54.854626009Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 26 07:14:54.856070 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 26 07:14:54.879369 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys"
Jun 26 07:14:54.889362 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 26 07:14:54.907916 systemd[1]: Finished sshkeys.service.
Jun 26 07:14:54.988432 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 26 07:14:54.998571 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 26 07:14:55.005681 sshd[1528]: Accepted publickey for core from 147.75.109.163 port 47754 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:14:55.014455 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 26 07:14:55.014674 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:14:55.015611 systemd[1]: Reached target getty.target - Login Prompts.
Jun 26 07:14:55.033903 containerd[1464]: time="2024-06-26T07:14:55.033294881Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 26 07:14:55.033903 containerd[1464]: time="2024-06-26T07:14:55.033435001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:14:55.051492 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 26 07:14:55.066888 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 26 07:14:55.073866 containerd[1464]: time="2024-06-26T07:14:55.072300993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:14:55.073866 containerd[1464]: time="2024-06-26T07:14:55.072390549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:14:55.076967 containerd[1464]: time="2024-06-26T07:14:55.075113810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:14:55.076967 containerd[1464]: time="2024-06-26T07:14:55.075173127Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 26 07:14:55.076967 containerd[1464]: time="2024-06-26T07:14:55.075429432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 26 07:14:55.076967 containerd[1464]: time="2024-06-26T07:14:55.075560688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:14:55.076967 containerd[1464]: time="2024-06-26T07:14:55.075583538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 26 07:14:55.076967 containerd[1464]: time="2024-06-26T07:14:55.075701185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:14:55.080448 containerd[1464]: time="2024-06-26T07:14:55.080380762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 26 07:14:55.080448 containerd[1464]: time="2024-06-26T07:14:55.080447615Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 26 07:14:55.080642 containerd[1464]: time="2024-06-26T07:14:55.080470178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:14:55.086819 containerd[1464]: time="2024-06-26T07:14:55.081381230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:14:55.086819 containerd[1464]: time="2024-06-26T07:14:55.081425503Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 26 07:14:55.086819 containerd[1464]: time="2024-06-26T07:14:55.081592941Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 26 07:14:55.086819 containerd[1464]: time="2024-06-26T07:14:55.081619959Z" level=info msg="metadata content store policy set" policy=shared
Jun 26 07:14:55.091077 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 26 07:14:55.092583 systemd-logind[1448]: New session 1 of user core.
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.108738869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.108944922Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.108972256Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109050036Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109396216Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109424468Z" level=info msg="NRI interface is disabled by configuration."
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109446972Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109793195Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109824654Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109851011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109877000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109903678Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109933850Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 26 07:14:55.111842 containerd[1464]: time="2024-06-26T07:14:55.109957312Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 26 07:14:55.112640 containerd[1464]: time="2024-06-26T07:14:55.109995575Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 26 07:14:55.112640 containerd[1464]: time="2024-06-26T07:14:55.110044591Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 26 07:14:55.112640 containerd[1464]: time="2024-06-26T07:14:55.110069916Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 26 07:14:55.112640 containerd[1464]: time="2024-06-26T07:14:55.110091056Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 26 07:14:55.112640 containerd[1464]: time="2024-06-26T07:14:55.110113419Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 26 07:14:55.112640 containerd[1464]: time="2024-06-26T07:14:55.110313505Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 26 07:14:55.112640 containerd[1464]: time="2024-06-26T07:14:55.110694254Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.115679776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.115812073Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.115868510Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.115982113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.116009475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.116031519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.116050643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.116069969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.116092914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.116538 containerd[1464]: time="2024-06-26T07:14:55.116125947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.120717 containerd[1464]: time="2024-06-26T07:14:55.118143970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.120717 containerd[1464]: time="2024-06-26T07:14:55.120658795Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 26 07:14:55.121250 containerd[1464]: time="2024-06-26T07:14:55.121145614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 26 07:14:55.121853 containerd[1464]: time="2024-06-26T07:14:55.121388710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1 Jun 26 07:14:55.121853 containerd[1464]: time="2024-06-26T07:14:55.121465327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 26 07:14:55.121853 containerd[1464]: time="2024-06-26T07:14:55.121497880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 26 07:14:55.121853 containerd[1464]: time="2024-06-26T07:14:55.121591287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 26 07:14:55.121853 containerd[1464]: time="2024-06-26T07:14:55.121645734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 26 07:14:55.121853 containerd[1464]: time="2024-06-26T07:14:55.121674548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 26 07:14:55.121853 containerd[1464]: time="2024-06-26T07:14:55.121713342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 26 07:14:55.124253 containerd[1464]: time="2024-06-26T07:14:55.122682722Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 26 07:14:55.124253 containerd[1464]: time="2024-06-26T07:14:55.122814828Z" level=info msg="Connect containerd service" Jun 26 07:14:55.124253 containerd[1464]: time="2024-06-26T07:14:55.122940836Z" level=info msg="using legacy CRI server" Jun 26 07:14:55.124253 containerd[1464]: time="2024-06-26T07:14:55.122956258Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 26 07:14:55.124253 containerd[1464]: time="2024-06-26T07:14:55.123442914Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 26 07:14:55.133557 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 26 07:14:55.144889 containerd[1464]: time="2024-06-26T07:14:55.136986523Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 26 07:14:55.144889 containerd[1464]: time="2024-06-26T07:14:55.137153711Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jun 26 07:14:55.144889 containerd[1464]: time="2024-06-26T07:14:55.137202145Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 26 07:14:55.144889 containerd[1464]: time="2024-06-26T07:14:55.137227062Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 26 07:14:55.144889 containerd[1464]: time="2024-06-26T07:14:55.137252636Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 26 07:14:55.148367 containerd[1464]: time="2024-06-26T07:14:55.148112471Z" level=info msg="Start subscribing containerd event" Jun 26 07:14:55.149207 containerd[1464]: time="2024-06-26T07:14:55.149145148Z" level=info msg="Start recovering state" Jun 26 07:14:55.149559 containerd[1464]: time="2024-06-26T07:14:55.149527218Z" level=info msg="Start event monitor" Jun 26 07:14:55.149913 containerd[1464]: time="2024-06-26T07:14:55.148617931Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 26 07:14:55.150459 containerd[1464]: time="2024-06-26T07:14:55.150425299Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 26 07:14:55.150898 containerd[1464]: time="2024-06-26T07:14:55.150725392Z" level=info msg="Start snapshots syncer" Jun 26 07:14:55.152411 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 26 07:14:55.159341 containerd[1464]: time="2024-06-26T07:14:55.153370181Z" level=info msg="Start cni network conf syncer for default" Jun 26 07:14:55.161948 containerd[1464]: time="2024-06-26T07:14:55.161864310Z" level=info msg="Start streaming server" Jun 26 07:14:55.168397 containerd[1464]: time="2024-06-26T07:14:55.162761654Z" level=info msg="containerd successfully booted in 0.327504s" Jun 26 07:14:55.164284 systemd[1]: Started containerd.service - containerd container runtime. Jun 26 07:14:55.187412 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:14:55.412122 systemd[1559]: Queued start job for default target default.target. Jun 26 07:14:55.418479 systemd[1559]: Created slice app.slice - User Application Slice. Jun 26 07:14:55.419136 systemd[1559]: Reached target paths.target - Paths. Jun 26 07:14:55.419164 systemd[1559]: Reached target timers.target - Timers. Jun 26 07:14:55.424088 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 26 07:14:55.465790 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 26 07:14:55.466024 systemd[1559]: Reached target sockets.target - Sockets. Jun 26 07:14:55.466050 systemd[1559]: Reached target basic.target - Basic System. Jun 26 07:14:55.466136 systemd[1559]: Reached target default.target - Main User Target. Jun 26 07:14:55.466189 systemd[1559]: Startup finished in 256ms. Jun 26 07:14:55.466571 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 26 07:14:55.483052 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 26 07:14:55.578348 systemd[1]: Started sshd@1-165.232.133.181:22-147.75.109.163:47764.service - OpenSSH per-connection server daemon (147.75.109.163:47764). 
Jun 26 07:14:55.697566 sshd[1570]: Accepted publickey for core from 147.75.109.163 port 47764 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:14:55.700808 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:14:55.719270 systemd-logind[1448]: New session 2 of user core. Jun 26 07:14:55.730987 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 26 07:14:55.812297 tar[1454]: linux-amd64/LICENSE Jun 26 07:14:55.812297 tar[1454]: linux-amd64/README.md Jun 26 07:14:55.840919 sshd[1570]: pam_unix(sshd:session): session closed for user core Jun 26 07:14:55.843948 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 26 07:14:55.849660 systemd[1]: sshd@1-165.232.133.181:22-147.75.109.163:47764.service: Deactivated successfully. Jun 26 07:14:55.854394 systemd[1]: session-2.scope: Deactivated successfully. Jun 26 07:14:55.868052 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jun 26 07:14:55.879287 systemd[1]: Started sshd@2-165.232.133.181:22-147.75.109.163:57232.service - OpenSSH per-connection server daemon (147.75.109.163:57232). Jun 26 07:14:55.884620 systemd-logind[1448]: Removed session 2. Jun 26 07:14:55.961912 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 57232 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:14:55.966511 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:14:55.986582 systemd-logind[1448]: New session 3 of user core. Jun 26 07:14:55.988118 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 26 07:14:56.068141 sshd[1580]: pam_unix(sshd:session): session closed for user core Jun 26 07:14:56.073839 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jun 26 07:14:56.076466 systemd[1]: sshd@2-165.232.133.181:22-147.75.109.163:57232.service: Deactivated successfully. 
Jun 26 07:14:56.081634 systemd[1]: session-3.scope: Deactivated successfully. Jun 26 07:14:56.083616 systemd-logind[1448]: Removed session 3. Jun 26 07:14:56.716565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:14:56.722520 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 26 07:14:56.729022 systemd[1]: Startup finished in 1.108s (kernel) + 5.346s (initrd) + 7.713s (userspace) = 14.169s. Jun 26 07:14:56.735476 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:14:57.836844 kubelet[1590]: E0626 07:14:57.836670 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:14:57.842064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:14:57.842690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 26 07:14:57.843222 systemd[1]: kubelet.service: Consumed 1.613s CPU time. Jun 26 07:15:06.084910 systemd[1]: Started sshd@3-165.232.133.181:22-147.75.109.163:54700.service - OpenSSH per-connection server daemon (147.75.109.163:54700). Jun 26 07:15:06.153823 sshd[1604]: Accepted publickey for core from 147.75.109.163 port 54700 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:15:06.156557 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:15:06.166107 systemd-logind[1448]: New session 4 of user core. Jun 26 07:15:06.171116 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jun 26 07:15:06.244313 sshd[1604]: pam_unix(sshd:session): session closed for user core Jun 26 07:15:06.254798 systemd[1]: sshd@3-165.232.133.181:22-147.75.109.163:54700.service: Deactivated successfully. Jun 26 07:15:06.256942 systemd[1]: session-4.scope: Deactivated successfully. Jun 26 07:15:06.260143 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jun 26 07:15:06.265296 systemd[1]: Started sshd@4-165.232.133.181:22-147.75.109.163:54708.service - OpenSSH per-connection server daemon (147.75.109.163:54708). Jun 26 07:15:06.267774 systemd-logind[1448]: Removed session 4. Jun 26 07:15:06.325910 sshd[1611]: Accepted publickey for core from 147.75.109.163 port 54708 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:15:06.328317 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:15:06.337633 systemd-logind[1448]: New session 5 of user core. Jun 26 07:15:06.349086 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 26 07:15:06.412022 sshd[1611]: pam_unix(sshd:session): session closed for user core Jun 26 07:15:06.421663 systemd[1]: sshd@4-165.232.133.181:22-147.75.109.163:54708.service: Deactivated successfully. Jun 26 07:15:06.423938 systemd[1]: session-5.scope: Deactivated successfully. Jun 26 07:15:06.426162 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jun 26 07:15:06.431476 systemd[1]: Started sshd@5-165.232.133.181:22-147.75.109.163:54710.service - OpenSSH per-connection server daemon (147.75.109.163:54710). Jun 26 07:15:06.433725 systemd-logind[1448]: Removed session 5. Jun 26 07:15:06.497025 sshd[1618]: Accepted publickey for core from 147.75.109.163 port 54710 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:15:06.499340 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:15:06.507829 systemd-logind[1448]: New session 6 of user core. 
Jun 26 07:15:06.518237 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 26 07:15:06.585152 sshd[1618]: pam_unix(sshd:session): session closed for user core Jun 26 07:15:06.595887 systemd[1]: sshd@5-165.232.133.181:22-147.75.109.163:54710.service: Deactivated successfully. Jun 26 07:15:06.598816 systemd[1]: session-6.scope: Deactivated successfully. Jun 26 07:15:06.602280 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jun 26 07:15:06.608316 systemd[1]: Started sshd@6-165.232.133.181:22-147.75.109.163:54712.service - OpenSSH per-connection server daemon (147.75.109.163:54712). Jun 26 07:15:06.610892 systemd-logind[1448]: Removed session 6. Jun 26 07:15:06.662570 sshd[1625]: Accepted publickey for core from 147.75.109.163 port 54712 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:15:06.664638 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:15:06.672146 systemd-logind[1448]: New session 7 of user core. Jun 26 07:15:06.679144 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 26 07:15:06.760048 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 26 07:15:06.760410 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:15:06.779450 sudo[1628]: pam_unix(sudo:session): session closed for user root Jun 26 07:15:06.783881 sshd[1625]: pam_unix(sshd:session): session closed for user core Jun 26 07:15:06.795109 systemd[1]: sshd@6-165.232.133.181:22-147.75.109.163:54712.service: Deactivated successfully. Jun 26 07:15:06.797438 systemd[1]: session-7.scope: Deactivated successfully. Jun 26 07:15:06.800097 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jun 26 07:15:06.807247 systemd[1]: Started sshd@7-165.232.133.181:22-147.75.109.163:54726.service - OpenSSH per-connection server daemon (147.75.109.163:54726). 
Jun 26 07:15:06.810204 systemd-logind[1448]: Removed session 7. Jun 26 07:15:06.863150 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 54726 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:15:06.866272 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:15:06.877004 systemd-logind[1448]: New session 8 of user core. Jun 26 07:15:06.882284 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 26 07:15:06.947720 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 26 07:15:06.948691 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:15:06.956052 sudo[1637]: pam_unix(sudo:session): session closed for user root Jun 26 07:15:06.964857 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 26 07:15:06.965204 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:15:06.991247 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 26 07:15:06.994308 auditctl[1640]: No rules Jun 26 07:15:06.994978 systemd[1]: audit-rules.service: Deactivated successfully. Jun 26 07:15:06.995321 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 26 07:15:06.999661 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 26 07:15:07.061249 augenrules[1658]: No rules Jun 26 07:15:07.063236 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 26 07:15:07.064717 sudo[1636]: pam_unix(sudo:session): session closed for user root Jun 26 07:15:07.069088 sshd[1633]: pam_unix(sshd:session): session closed for user core Jun 26 07:15:07.078585 systemd[1]: sshd@7-165.232.133.181:22-147.75.109.163:54726.service: Deactivated successfully. 
Jun 26 07:15:07.081087 systemd[1]: session-8.scope: Deactivated successfully. Jun 26 07:15:07.084081 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jun 26 07:15:07.091331 systemd[1]: Started sshd@8-165.232.133.181:22-147.75.109.163:54728.service - OpenSSH per-connection server daemon (147.75.109.163:54728). Jun 26 07:15:07.093616 systemd-logind[1448]: Removed session 8. Jun 26 07:15:07.144209 sshd[1666]: Accepted publickey for core from 147.75.109.163 port 54728 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:15:07.146822 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:15:07.156535 systemd-logind[1448]: New session 9 of user core. Jun 26 07:15:07.163077 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 26 07:15:07.225091 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 26 07:15:07.225436 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:15:07.426791 (dockerd)[1678]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 26 07:15:07.428189 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 26 07:15:07.851510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 26 07:15:07.871634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:15:07.960806 dockerd[1678]: time="2024-06-26T07:15:07.960408551Z" level=info msg="Starting up" Jun 26 07:15:08.041601 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport883917805-merged.mount: Deactivated successfully. Jun 26 07:15:08.057651 systemd[1]: var-lib-docker-metacopy\x2dcheck3376052472-merged.mount: Deactivated successfully. 
Jun 26 07:15:08.120824 dockerd[1678]: time="2024-06-26T07:15:08.120390189Z" level=info msg="Loading containers: start." Jun 26 07:15:08.137172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:15:08.139525 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:15:08.249123 kubelet[1696]: E0626 07:15:08.249050 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:15:08.255166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:15:08.255322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 26 07:15:08.308781 kernel: Initializing XFRM netlink socket Jun 26 07:15:08.345362 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jun 26 07:15:08.371091 systemd-timesyncd[1342]: Contacted time server 148.135.68.31:123 (2.flatcar.pool.ntp.org). Jun 26 07:15:08.371993 systemd-timesyncd[1342]: Initial clock synchronization to Wed 2024-06-26 07:15:08.567215 UTC. Jun 26 07:15:08.415476 systemd-networkd[1360]: docker0: Link UP Jun 26 07:15:08.435575 dockerd[1678]: time="2024-06-26T07:15:08.435525236Z" level=info msg="Loading containers: done." 
Jun 26 07:15:08.529843 dockerd[1678]: time="2024-06-26T07:15:08.529415409Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 26 07:15:08.529843 dockerd[1678]: time="2024-06-26T07:15:08.529677769Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 26 07:15:08.529843 dockerd[1678]: time="2024-06-26T07:15:08.529832375Z" level=info msg="Daemon has completed initialization" Jun 26 07:15:08.606321 dockerd[1678]: time="2024-06-26T07:15:08.606217465Z" level=info msg="API listen on /run/docker.sock" Jun 26 07:15:08.607266 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 26 07:15:09.645884 containerd[1464]: time="2024-06-26T07:15:09.645519300Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 26 07:15:10.498674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267669976.mount: Deactivated successfully. 
Jun 26 07:15:12.901672 containerd[1464]: time="2024-06-26T07:15:12.901571566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:12.903324 containerd[1464]: time="2024-06-26T07:15:12.902874704Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jun 26 07:15:12.905808 containerd[1464]: time="2024-06-26T07:15:12.904217627Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:12.910223 containerd[1464]: time="2024-06-26T07:15:12.910155571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:12.912139 containerd[1464]: time="2024-06-26T07:15:12.912082113Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 3.26646723s" Jun 26 07:15:12.912455 containerd[1464]: time="2024-06-26T07:15:12.912141531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jun 26 07:15:12.948703 containerd[1464]: time="2024-06-26T07:15:12.948624350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 26 07:15:16.121253 containerd[1464]: time="2024-06-26T07:15:16.121172964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:16.124227 containerd[1464]: time="2024-06-26T07:15:16.124138378Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747" Jun 26 07:15:16.126660 containerd[1464]: time="2024-06-26T07:15:16.126578363Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:16.132974 containerd[1464]: time="2024-06-26T07:15:16.132849833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:16.136233 containerd[1464]: time="2024-06-26T07:15:16.134655890Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 3.185984517s" Jun 26 07:15:16.136233 containerd[1464]: time="2024-06-26T07:15:16.134725935Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jun 26 07:15:16.202336 containerd[1464]: time="2024-06-26T07:15:16.201466651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 26 07:15:18.093354 containerd[1464]: time="2024-06-26T07:15:18.093239118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:18.096878 containerd[1464]: time="2024-06-26T07:15:18.096620686Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803" Jun 26 07:15:18.104082 containerd[1464]: time="2024-06-26T07:15:18.104011961Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:18.110798 containerd[1464]: time="2024-06-26T07:15:18.110634968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:18.112151 containerd[1464]: time="2024-06-26T07:15:18.112085026Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.910548225s" Jun 26 07:15:18.112449 containerd[1464]: time="2024-06-26T07:15:18.112191775Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jun 26 07:15:18.151444 containerd[1464]: time="2024-06-26T07:15:18.151388679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 26 07:15:18.351918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 26 07:15:18.362250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:15:18.542088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 26 07:15:18.556491 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 26 07:15:18.647988 kubelet[1913]: E0626 07:15:18.647624 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 26 07:15:18.652577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 26 07:15:18.653145 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 26 07:15:19.677793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234529461.mount: Deactivated successfully.
Jun 26 07:15:20.273910 containerd[1464]: time="2024-06-26T07:15:20.273830285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:20.275774 containerd[1464]: time="2024-06-26T07:15:20.275546683Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334"
Jun 26 07:15:20.278082 containerd[1464]: time="2024-06-26T07:15:20.277987909Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:20.282486 containerd[1464]: time="2024-06-26T07:15:20.282183689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:20.283169 containerd[1464]: time="2024-06-26T07:15:20.283114520Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 2.13166328s"
Jun 26 07:15:20.283169 containerd[1464]: time="2024-06-26T07:15:20.283169847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\""
Jun 26 07:15:20.318322 containerd[1464]: time="2024-06-26T07:15:20.318277788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jun 26 07:15:20.922254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767199259.mount: Deactivated successfully.
Jun 26 07:15:21.973105 containerd[1464]: time="2024-06-26T07:15:21.972993523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:21.974782 containerd[1464]: time="2024-06-26T07:15:21.974479270Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jun 26 07:15:21.977808 containerd[1464]: time="2024-06-26T07:15:21.976001645Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:21.981161 containerd[1464]: time="2024-06-26T07:15:21.980641217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:21.984654 containerd[1464]: time="2024-06-26T07:15:21.984461942Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.66613737s"
Jun 26 07:15:21.984654 containerd[1464]: time="2024-06-26T07:15:21.984532409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jun 26 07:15:22.025087 containerd[1464]: time="2024-06-26T07:15:22.024758648Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jun 26 07:15:22.679173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4181890274.mount: Deactivated successfully.
Jun 26 07:15:22.688634 containerd[1464]: time="2024-06-26T07:15:22.688566979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:22.689304 containerd[1464]: time="2024-06-26T07:15:22.689220693Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jun 26 07:15:22.691386 containerd[1464]: time="2024-06-26T07:15:22.691309822Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:22.695332 containerd[1464]: time="2024-06-26T07:15:22.694810198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:22.695974 containerd[1464]: time="2024-06-26T07:15:22.695934600Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 671.137878ms"
Jun 26 07:15:22.695974 containerd[1464]: time="2024-06-26T07:15:22.695974541Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jun 26 07:15:22.727393 containerd[1464]: time="2024-06-26T07:15:22.727334893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jun 26 07:15:23.369075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366022074.mount: Deactivated successfully.
Jun 26 07:15:25.463088 containerd[1464]: time="2024-06-26T07:15:25.463005303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:25.465538 containerd[1464]: time="2024-06-26T07:15:25.465174858Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jun 26 07:15:25.467237 containerd[1464]: time="2024-06-26T07:15:25.466691135Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:25.469979 containerd[1464]: time="2024-06-26T07:15:25.469935414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:15:25.471790 containerd[1464]: time="2024-06-26T07:15:25.471726782Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.744331005s"
Jun 26 07:15:25.471790 containerd[1464]: time="2024-06-26T07:15:25.471792216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jun 26 07:15:28.311477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:15:28.328136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:15:28.372076 systemd[1]: Reloading requested from client PID 2100 ('systemctl') (unit session-9.scope)...
Jun 26 07:15:28.372110 systemd[1]: Reloading...
Jun 26 07:15:28.529806 zram_generator::config[2143]: No configuration found.
Jun 26 07:15:28.663625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 26 07:15:28.767572 systemd[1]: Reloading finished in 394 ms.
Jun 26 07:15:28.837245 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 26 07:15:28.837358 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 26 07:15:28.837645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:15:28.846373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:15:28.994165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:15:29.007881 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 26 07:15:29.085469 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 07:15:29.085961 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 26 07:15:29.086032 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 07:15:29.087600 kubelet[2192]: I0626 07:15:29.087510 2192 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 26 07:15:29.556129 kubelet[2192]: I0626 07:15:29.556066 2192 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jun 26 07:15:29.556129 kubelet[2192]: I0626 07:15:29.556111 2192 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 26 07:15:29.556508 kubelet[2192]: I0626 07:15:29.556479 2192 server.go:919] "Client rotation is on, will bootstrap in background"
Jun 26 07:15:29.597265 kubelet[2192]: E0626 07:15:29.597213 2192 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://165.232.133.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.598563 kubelet[2192]: I0626 07:15:29.598389 2192 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 26 07:15:29.612758 kubelet[2192]: I0626 07:15:29.612710 2192 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 26 07:15:29.615529 kubelet[2192]: I0626 07:15:29.614961 2192 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 26 07:15:29.617190 kubelet[2192]: I0626 07:15:29.616654 2192 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 26 07:15:29.617190 kubelet[2192]: I0626 07:15:29.616727 2192 topology_manager.go:138] "Creating topology manager with none policy"
Jun 26 07:15:29.617190 kubelet[2192]: I0626 07:15:29.616791 2192 container_manager_linux.go:301] "Creating device plugin manager"
Jun 26 07:15:29.617190 kubelet[2192]: I0626 07:15:29.616976 2192 state_mem.go:36] "Initialized new in-memory state store"
Jun 26 07:15:29.618520 kubelet[2192]: W0626 07:15:29.618373 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://165.232.133.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-0-ebda1d1a0c&limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.618520 kubelet[2192]: E0626 07:15:29.618477 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://165.232.133.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-0-ebda1d1a0c&limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.619726 kubelet[2192]: I0626 07:15:29.619650 2192 kubelet.go:396] "Attempting to sync node with API server"
Jun 26 07:15:29.619726 kubelet[2192]: I0626 07:15:29.619725 2192 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 26 07:15:29.621952 kubelet[2192]: I0626 07:15:29.619825 2192 kubelet.go:312] "Adding apiserver pod source"
Jun 26 07:15:29.621952 kubelet[2192]: I0626 07:15:29.619850 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 26 07:15:29.622288 kubelet[2192]: I0626 07:15:29.622248 2192 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 26 07:15:29.626498 kubelet[2192]: W0626 07:15:29.626418 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://165.232.133.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.626772 kubelet[2192]: E0626 07:15:29.626733 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://165.232.133.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.631762 kubelet[2192]: I0626 07:15:29.631645 2192 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 26 07:15:29.631933 kubelet[2192]: W0626 07:15:29.631831 2192 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 26 07:15:29.634347 kubelet[2192]: I0626 07:15:29.632724 2192 server.go:1256] "Started kubelet"
Jun 26 07:15:29.634347 kubelet[2192]: I0626 07:15:29.633067 2192 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jun 26 07:15:29.634859 kubelet[2192]: I0626 07:15:29.634832 2192 server.go:461] "Adding debug handlers to kubelet server"
Jun 26 07:15:29.638107 kubelet[2192]: I0626 07:15:29.638062 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 26 07:15:29.638539 kubelet[2192]: I0626 07:15:29.638504 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 26 07:15:29.638816 kubelet[2192]: I0626 07:15:29.638794 2192 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 26 07:15:29.642837 kubelet[2192]: E0626 07:15:29.642791 2192 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.133.181:6443/api/v1/namespaces/default/events\": dial tcp 165.232.133.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012.0.0-0-ebda1d1a0c.17dc7c93407d6507 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-0-ebda1d1a0c,UID:ci-4012.0.0-0-ebda1d1a0c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-0-ebda1d1a0c,},FirstTimestamp:2024-06-26 07:15:29.632683271 +0000 UTC m=+0.617700333,LastTimestamp:2024-06-26 07:15:29.632683271 +0000 UTC m=+0.617700333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-0-ebda1d1a0c,}"
Jun 26 07:15:29.655103 kubelet[2192]: E0626 07:15:29.655063 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-0-ebda1d1a0c\" not found"
Jun 26 07:15:29.656852 kubelet[2192]: I0626 07:15:29.656817 2192 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 26 07:15:29.657135 kubelet[2192]: I0626 07:15:29.657121 2192 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jun 26 07:15:29.657296 kubelet[2192]: I0626 07:15:29.657284 2192 reconciler_new.go:29] "Reconciler: start to sync state"
Jun 26 07:15:29.658453 kubelet[2192]: E0626 07:15:29.658423 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.133.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-0-ebda1d1a0c?timeout=10s\": dial tcp 165.232.133.181:6443: connect: connection refused" interval="200ms"
Jun 26 07:15:29.663235 kubelet[2192]: W0626 07:15:29.663113 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://165.232.133.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.663235 kubelet[2192]: E0626 07:15:29.663194 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://165.232.133.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.664696 kubelet[2192]: I0626 07:15:29.663729 2192 factory.go:221] Registration of the containerd container factory successfully
Jun 26 07:15:29.664696 kubelet[2192]: I0626 07:15:29.663765 2192 factory.go:221] Registration of the systemd container factory successfully
Jun 26 07:15:29.664696 kubelet[2192]: I0626 07:15:29.663867 2192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 26 07:15:29.673578 kubelet[2192]: I0626 07:15:29.673529 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 26 07:15:29.675379 kubelet[2192]: I0626 07:15:29.675328 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 26 07:15:29.675379 kubelet[2192]: I0626 07:15:29.675397 2192 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 26 07:15:29.675548 kubelet[2192]: I0626 07:15:29.675437 2192 kubelet.go:2329] "Starting kubelet main sync loop"
Jun 26 07:15:29.675548 kubelet[2192]: E0626 07:15:29.675516 2192 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 26 07:15:29.688508 kubelet[2192]: E0626 07:15:29.688451 2192 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 26 07:15:29.689184 kubelet[2192]: W0626 07:15:29.689119 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://165.232.133.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.689667 kubelet[2192]: E0626 07:15:29.689648 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://165.232.133.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:29.696499 kubelet[2192]: I0626 07:15:29.696467 2192 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 26 07:15:29.696499 kubelet[2192]: I0626 07:15:29.696489 2192 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 26 07:15:29.696499 kubelet[2192]: I0626 07:15:29.696514 2192 state_mem.go:36] "Initialized new in-memory state store"
Jun 26 07:15:29.701310 kubelet[2192]: I0626 07:15:29.701243 2192 policy_none.go:49] "None policy: Start"
Jun 26 07:15:29.702633 kubelet[2192]: I0626 07:15:29.702563 2192 memory_manager.go:170] "Starting memorymanager" policy="None"
Jun 26 07:15:29.702633 kubelet[2192]: I0626 07:15:29.702599 2192 state_mem.go:35] "Initializing new in-memory state store"
Jun 26 07:15:29.714261 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 26 07:15:29.731143 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 26 07:15:29.736287 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 26 07:15:29.747369 kubelet[2192]: I0626 07:15:29.747059 2192 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 26 07:15:29.747906 kubelet[2192]: I0626 07:15:29.747660 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 26 07:15:29.750422 kubelet[2192]: E0626 07:15:29.750387 2192 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-0-ebda1d1a0c\" not found"
Jun 26 07:15:29.758322 kubelet[2192]: I0626 07:15:29.758291 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.759123 kubelet[2192]: E0626 07:15:29.759094 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.133.181:6443/api/v1/nodes\": dial tcp 165.232.133.181:6443: connect: connection refused" node="ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.776635 kubelet[2192]: I0626 07:15:29.776585 2192 topology_manager.go:215] "Topology Admit Handler" podUID="4557f76b6b2c30597fc978e9efdd1fb6" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.778670 kubelet[2192]: I0626 07:15:29.778076 2192 topology_manager.go:215] "Topology Admit Handler" podUID="5ccfd86310337b6c59208c791967868b" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.779534 kubelet[2192]: I0626 07:15:29.779498 2192 topology_manager.go:215] "Topology Admit Handler" podUID="255e5c9a922e99867b851181ddfe8187" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.793375 systemd[1]: Created slice kubepods-burstable-pod4557f76b6b2c30597fc978e9efdd1fb6.slice - libcontainer container kubepods-burstable-pod4557f76b6b2c30597fc978e9efdd1fb6.slice.
Jun 26 07:15:29.809837 systemd[1]: Created slice kubepods-burstable-pod255e5c9a922e99867b851181ddfe8187.slice - libcontainer container kubepods-burstable-pod255e5c9a922e99867b851181ddfe8187.slice.
Jun 26 07:15:29.832193 systemd[1]: Created slice kubepods-burstable-pod5ccfd86310337b6c59208c791967868b.slice - libcontainer container kubepods-burstable-pod5ccfd86310337b6c59208c791967868b.slice.
Jun 26 07:15:29.859418 kubelet[2192]: E0626 07:15:29.859373 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.133.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-0-ebda1d1a0c?timeout=10s\": dial tcp 165.232.133.181:6443: connect: connection refused" interval="400ms"
Jun 26 07:15:29.959025 kubelet[2192]: I0626 07:15:29.958960 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4557f76b6b2c30597fc978e9efdd1fb6-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"4557f76b6b2c30597fc978e9efdd1fb6\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.959025 kubelet[2192]: I0626 07:15:29.959043 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.959322 kubelet[2192]: I0626 07:15:29.959081 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.959322 kubelet[2192]: I0626 07:15:29.959148 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.959322 kubelet[2192]: I0626 07:15:29.959171 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.959322 kubelet[2192]: I0626 07:15:29.959192 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/255e5c9a922e99867b851181ddfe8187-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"255e5c9a922e99867b851181ddfe8187\") " pod="kube-system/kube-scheduler-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.959322 kubelet[2192]: I0626 07:15:29.959210 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4557f76b6b2c30597fc978e9efdd1fb6-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"4557f76b6b2c30597fc978e9efdd1fb6\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.959563 kubelet[2192]: I0626 07:15:29.959228 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4557f76b6b2c30597fc978e9efdd1fb6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"4557f76b6b2c30597fc978e9efdd1fb6\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.959563 kubelet[2192]: I0626 07:15:29.959247 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.960974 kubelet[2192]: I0626 07:15:29.960491 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:29.960974 kubelet[2192]: E0626 07:15:29.960943 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.133.181:6443/api/v1/nodes\": dial tcp 165.232.133.181:6443: connect: connection refused" node="ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:30.108343 kubelet[2192]: E0626 07:15:30.108270 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:15:30.109276 containerd[1464]: time="2024-06-26T07:15:30.109218663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-0-ebda1d1a0c,Uid:4557f76b6b2c30597fc978e9efdd1fb6,Namespace:kube-system,Attempt:0,}"
Jun 26 07:15:30.127181 kubelet[2192]: E0626 07:15:30.127126 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:15:30.135812 containerd[1464]: time="2024-06-26T07:15:30.135717002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-0-ebda1d1a0c,Uid:255e5c9a922e99867b851181ddfe8187,Namespace:kube-system,Attempt:0,}"
Jun 26 07:15:30.137593 kubelet[2192]: E0626 07:15:30.137426 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:15:30.138455 containerd[1464]: time="2024-06-26T07:15:30.138065194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c,Uid:5ccfd86310337b6c59208c791967868b,Namespace:kube-system,Attempt:0,}"
Jun 26 07:15:30.260695 kubelet[2192]: E0626 07:15:30.260615 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.133.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-0-ebda1d1a0c?timeout=10s\": dial tcp 165.232.133.181:6443: connect: connection refused" interval="800ms"
Jun 26 07:15:30.362696 kubelet[2192]: I0626 07:15:30.362540 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:30.363148 kubelet[2192]: E0626 07:15:30.363033 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.133.181:6443/api/v1/nodes\": dial tcp 165.232.133.181:6443: connect: connection refused" node="ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:15:30.605691 kubelet[2192]: W0626 07:15:30.605580 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://165.232.133.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-0-ebda1d1a0c&limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:30.605691 kubelet[2192]: E0626 07:15:30.605665 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://165.232.133.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-0-ebda1d1a0c&limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:30.787293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount835097692.mount: Deactivated successfully.
Jun 26 07:15:30.801316 containerd[1464]: time="2024-06-26T07:15:30.801068504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:15:30.802806 containerd[1464]: time="2024-06-26T07:15:30.802354670Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:15:30.803520 containerd[1464]: time="2024-06-26T07:15:30.803469459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:15:30.804631 containerd[1464]: time="2024-06-26T07:15:30.804579099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 26 07:15:30.805064 containerd[1464]: time="2024-06-26T07:15:30.805030640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 26 07:15:30.805859 containerd[1464]: time="2024-06-26T07:15:30.805821863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jun 26 07:15:30.806490 containerd[1464]: time="2024-06-26T07:15:30.806447386Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:15:30.811115 containerd[1464]: time="2024-06-26T07:15:30.811025207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:15:30.812945 containerd[1464]: time="2024-06-26T07:15:30.812106078Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.219453ms"
Jun 26 07:15:30.816427 containerd[1464]: time="2024-06-26T07:15:30.816014958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.717624ms"
Jun 26 07:15:30.819168 containerd[1464]: time="2024-06-26T07:15:30.819107691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 706.028916ms"
Jun 26 07:15:30.832984 kubelet[2192]: W0626 07:15:30.832875 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://165.232.133.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:30.832984 kubelet[2192]: E0626 07:15:30.832937 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://165.232.133.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:30.839528 kubelet[2192]: W0626 07:15:30.839471 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://165.232.133.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:30.840153 kubelet[2192]: E0626 07:15:30.840090 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://165.232.133.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:30.861849 kubelet[2192]: W0626 07:15:30.860274 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://165.232.133.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:30.861849 kubelet[2192]: E0626 07:15:30.860397 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://165.232.133.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.133.181:6443: connect: connection refused
Jun 26 07:15:31.011579 containerd[1464]: time="2024-06-26T07:15:31.011212217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:15:31.011579 containerd[1464]: time="2024-06-26T07:15:31.011341263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:31.011579 containerd[1464]: time="2024-06-26T07:15:31.011367031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:15:31.011579 containerd[1464]: time="2024-06-26T07:15:31.011380575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:31.016771 containerd[1464]: time="2024-06-26T07:15:31.015110389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:15:31.016771 containerd[1464]: time="2024-06-26T07:15:31.015165651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:31.016771 containerd[1464]: time="2024-06-26T07:15:31.015180644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:15:31.016771 containerd[1464]: time="2024-06-26T07:15:31.015190714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:31.021130 containerd[1464]: time="2024-06-26T07:15:31.020618416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:15:31.021130 containerd[1464]: time="2024-06-26T07:15:31.020994890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:31.022566 containerd[1464]: time="2024-06-26T07:15:31.021020376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:15:31.022566 containerd[1464]: time="2024-06-26T07:15:31.022392538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:31.050341 systemd[1]: Started cri-containerd-3bd90784a65a02f3221f95b038858c303416a4ce7fff1164959096d55934be3f.scope - libcontainer container 3bd90784a65a02f3221f95b038858c303416a4ce7fff1164959096d55934be3f. Jun 26 07:15:31.065677 kubelet[2192]: E0626 07:15:31.061123 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.133.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-0-ebda1d1a0c?timeout=10s\": dial tcp 165.232.133.181:6443: connect: connection refused" interval="1.6s" Jun 26 07:15:31.072080 systemd[1]: Started cri-containerd-e4d05b8f587ebed8a6082a1ad84f930c0a06c010139dd38a56164f1fd82a50ce.scope - libcontainer container e4d05b8f587ebed8a6082a1ad84f930c0a06c010139dd38a56164f1fd82a50ce. Jun 26 07:15:31.076112 systemd[1]: Started cri-containerd-ed8f1c24a888c5b53c9784ba8111aaa005d23dfd8eba3ffa3a11dc8e17d066e9.scope - libcontainer container ed8f1c24a888c5b53c9784ba8111aaa005d23dfd8eba3ffa3a11dc8e17d066e9. 
Jun 26 07:15:31.165634 kubelet[2192]: I0626 07:15:31.164804 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:31.165634 kubelet[2192]: E0626 07:15:31.165309 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.133.181:6443/api/v1/nodes\": dial tcp 165.232.133.181:6443: connect: connection refused" node="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:31.180830 containerd[1464]: time="2024-06-26T07:15:31.180467458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c,Uid:5ccfd86310337b6c59208c791967868b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4d05b8f587ebed8a6082a1ad84f930c0a06c010139dd38a56164f1fd82a50ce\"" Jun 26 07:15:31.186368 kubelet[2192]: E0626 07:15:31.185325 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:31.186544 containerd[1464]: time="2024-06-26T07:15:31.185871646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-0-ebda1d1a0c,Uid:4557f76b6b2c30597fc978e9efdd1fb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bd90784a65a02f3221f95b038858c303416a4ce7fff1164959096d55934be3f\"" Jun 26 07:15:31.190377 kubelet[2192]: E0626 07:15:31.190210 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:31.195637 containerd[1464]: time="2024-06-26T07:15:31.195363696Z" level=info msg="CreateContainer within sandbox \"e4d05b8f587ebed8a6082a1ad84f930c0a06c010139dd38a56164f1fd82a50ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 26 07:15:31.199907 containerd[1464]: time="2024-06-26T07:15:31.199598128Z" level=info 
msg="CreateContainer within sandbox \"3bd90784a65a02f3221f95b038858c303416a4ce7fff1164959096d55934be3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 26 07:15:31.208346 containerd[1464]: time="2024-06-26T07:15:31.208229637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-0-ebda1d1a0c,Uid:255e5c9a922e99867b851181ddfe8187,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed8f1c24a888c5b53c9784ba8111aaa005d23dfd8eba3ffa3a11dc8e17d066e9\"" Jun 26 07:15:31.209871 kubelet[2192]: E0626 07:15:31.209648 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:31.213548 containerd[1464]: time="2024-06-26T07:15:31.213466810Z" level=info msg="CreateContainer within sandbox \"ed8f1c24a888c5b53c9784ba8111aaa005d23dfd8eba3ffa3a11dc8e17d066e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 26 07:15:31.245366 containerd[1464]: time="2024-06-26T07:15:31.245299838Z" level=info msg="CreateContainer within sandbox \"e4d05b8f587ebed8a6082a1ad84f930c0a06c010139dd38a56164f1fd82a50ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b8c1581ceb3e5f392b8b2b45e0a76d7c0d4b2bb5f3e9c9a968d666d4af9703eb\"" Jun 26 07:15:31.247033 containerd[1464]: time="2024-06-26T07:15:31.246998691Z" level=info msg="StartContainer for \"b8c1581ceb3e5f392b8b2b45e0a76d7c0d4b2bb5f3e9c9a968d666d4af9703eb\"" Jun 26 07:15:31.253638 containerd[1464]: time="2024-06-26T07:15:31.253547847Z" level=info msg="CreateContainer within sandbox \"ed8f1c24a888c5b53c9784ba8111aaa005d23dfd8eba3ffa3a11dc8e17d066e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a9176d97d8e53e03238fdf57660a27756d7bcce9cb562d9b25284888ce43b840\"" Jun 26 07:15:31.254513 containerd[1464]: time="2024-06-26T07:15:31.254298743Z" level=info 
msg="StartContainer for \"a9176d97d8e53e03238fdf57660a27756d7bcce9cb562d9b25284888ce43b840\"" Jun 26 07:15:31.256072 containerd[1464]: time="2024-06-26T07:15:31.256019368Z" level=info msg="CreateContainer within sandbox \"3bd90784a65a02f3221f95b038858c303416a4ce7fff1164959096d55934be3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e46520ab1784952908eea5e1e7b0e671f0dde12c3c5353b3f21ffa2e0f98b11a\"" Jun 26 07:15:31.257081 containerd[1464]: time="2024-06-26T07:15:31.257006368Z" level=info msg="StartContainer for \"e46520ab1784952908eea5e1e7b0e671f0dde12c3c5353b3f21ffa2e0f98b11a\"" Jun 26 07:15:31.302009 systemd[1]: Started cri-containerd-b8c1581ceb3e5f392b8b2b45e0a76d7c0d4b2bb5f3e9c9a968d666d4af9703eb.scope - libcontainer container b8c1581ceb3e5f392b8b2b45e0a76d7c0d4b2bb5f3e9c9a968d666d4af9703eb. Jun 26 07:15:31.316951 systemd[1]: Started cri-containerd-a9176d97d8e53e03238fdf57660a27756d7bcce9cb562d9b25284888ce43b840.scope - libcontainer container a9176d97d8e53e03238fdf57660a27756d7bcce9cb562d9b25284888ce43b840. Jun 26 07:15:31.327037 systemd[1]: Started cri-containerd-e46520ab1784952908eea5e1e7b0e671f0dde12c3c5353b3f21ffa2e0f98b11a.scope - libcontainer container e46520ab1784952908eea5e1e7b0e671f0dde12c3c5353b3f21ffa2e0f98b11a. 
Jun 26 07:15:31.401336 containerd[1464]: time="2024-06-26T07:15:31.401166109Z" level=info msg="StartContainer for \"a9176d97d8e53e03238fdf57660a27756d7bcce9cb562d9b25284888ce43b840\" returns successfully" Jun 26 07:15:31.432806 containerd[1464]: time="2024-06-26T07:15:31.432037419Z" level=info msg="StartContainer for \"b8c1581ceb3e5f392b8b2b45e0a76d7c0d4b2bb5f3e9c9a968d666d4af9703eb\" returns successfully" Jun 26 07:15:31.444239 containerd[1464]: time="2024-06-26T07:15:31.444182152Z" level=info msg="StartContainer for \"e46520ab1784952908eea5e1e7b0e671f0dde12c3c5353b3f21ffa2e0f98b11a\" returns successfully" Jun 26 07:15:31.706719 kubelet[2192]: E0626 07:15:31.706677 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:31.711285 kubelet[2192]: E0626 07:15:31.711249 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:31.714756 kubelet[2192]: E0626 07:15:31.714708 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:32.719727 kubelet[2192]: E0626 07:15:32.718683 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:32.767352 kubelet[2192]: I0626 07:15:32.767315 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:33.228861 kubelet[2192]: E0626 07:15:33.228605 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Jun 26 07:15:33.926674 kubelet[2192]: I0626 07:15:33.926613 2192 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:33.970668 kubelet[2192]: E0626 07:15:33.970605 2192 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4012.0.0-0-ebda1d1a0c.17dc7c93407d6507 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-0-ebda1d1a0c,UID:ci-4012.0.0-0-ebda1d1a0c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-0-ebda1d1a0c,},FirstTimestamp:2024-06-26 07:15:29.632683271 +0000 UTC m=+0.617700333,LastTimestamp:2024-06-26 07:15:29.632683271 +0000 UTC m=+0.617700333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-0-ebda1d1a0c,}" Jun 26 07:15:33.995198 kubelet[2192]: E0626 07:15:33.995154 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jun 26 07:15:34.625839 kubelet[2192]: I0626 07:15:34.625614 2192 apiserver.go:52] "Watching apiserver" Jun 26 07:15:34.658467 kubelet[2192]: I0626 07:15:34.658400 2192 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 26 07:15:37.229559 systemd[1]: Reloading requested from client PID 2466 ('systemctl') (unit session-9.scope)... Jun 26 07:15:37.229592 systemd[1]: Reloading... Jun 26 07:15:37.567939 zram_generator::config[2509]: No configuration found. Jun 26 07:15:37.839799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 26 07:15:38.021620 systemd[1]: Reloading finished in 790 ms. Jun 26 07:15:38.088570 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:15:38.105215 systemd[1]: kubelet.service: Deactivated successfully. Jun 26 07:15:38.105728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:15:38.106071 systemd[1]: kubelet.service: Consumed 1.145s CPU time, 105.9M memory peak, 0B memory swap peak. Jun 26 07:15:38.114258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:15:38.364725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:15:38.385303 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 26 07:15:38.512093 kubelet[2555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 26 07:15:38.514788 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 26 07:15:38.514788 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 26 07:15:38.514788 kubelet[2555]: I0626 07:15:38.513058 2555 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 26 07:15:38.537497 kubelet[2555]: I0626 07:15:38.537414 2555 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 26 07:15:38.537945 kubelet[2555]: I0626 07:15:38.537910 2555 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 26 07:15:38.538506 kubelet[2555]: I0626 07:15:38.538474 2555 server.go:919] "Client rotation is on, will bootstrap in background" Jun 26 07:15:38.545017 kubelet[2555]: I0626 07:15:38.544970 2555 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 26 07:15:38.577176 kubelet[2555]: I0626 07:15:38.577062 2555 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 26 07:15:38.594417 kubelet[2555]: I0626 07:15:38.594371 2555 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 26 07:15:38.597909 kubelet[2555]: I0626 07:15:38.597861 2555 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 26 07:15:38.598630 kubelet[2555]: I0626 07:15:38.598574 2555 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 26 07:15:38.599544 kubelet[2555]: I0626 07:15:38.599005 2555 topology_manager.go:138] "Creating topology manager with none policy" Jun 26 07:15:38.599544 kubelet[2555]: I0626 07:15:38.599043 2555 container_manager_linux.go:301] "Creating device plugin manager" Jun 26 07:15:38.599544 kubelet[2555]: I0626 
07:15:38.599119 2555 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:15:38.599544 kubelet[2555]: I0626 07:15:38.599290 2555 kubelet.go:396] "Attempting to sync node with API server" Jun 26 07:15:38.599544 kubelet[2555]: I0626 07:15:38.599369 2555 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 26 07:15:38.599544 kubelet[2555]: I0626 07:15:38.599416 2555 kubelet.go:312] "Adding apiserver pod source" Jun 26 07:15:38.599544 kubelet[2555]: I0626 07:15:38.599440 2555 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 26 07:15:38.603809 kubelet[2555]: I0626 07:15:38.603625 2555 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 26 07:15:38.604454 kubelet[2555]: I0626 07:15:38.604425 2555 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 26 07:15:38.605250 kubelet[2555]: I0626 07:15:38.605223 2555 server.go:1256] "Started kubelet" Jun 26 07:15:38.610159 kubelet[2555]: I0626 07:15:38.609861 2555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 26 07:15:38.629895 kubelet[2555]: I0626 07:15:38.629554 2555 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 26 07:15:38.632555 kubelet[2555]: I0626 07:15:38.631540 2555 server.go:461] "Adding debug handlers to kubelet server" Jun 26 07:15:38.655206 kubelet[2555]: I0626 07:15:38.655145 2555 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 26 07:15:38.657260 kubelet[2555]: I0626 07:15:38.657214 2555 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 26 07:15:38.660530 kubelet[2555]: I0626 07:15:38.660266 2555 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 26 07:15:38.688343 kubelet[2555]: I0626 07:15:38.661064 2555 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Jun 26 07:15:38.688343 kubelet[2555]: I0626 07:15:38.687612 2555 reconciler_new.go:29] "Reconciler: start to sync state" Jun 26 07:15:38.692233 kubelet[2555]: I0626 07:15:38.692182 2555 factory.go:221] Registration of the systemd container factory successfully Jun 26 07:15:38.697512 kubelet[2555]: I0626 07:15:38.696967 2555 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 26 07:15:38.718477 kubelet[2555]: I0626 07:15:38.718138 2555 factory.go:221] Registration of the containerd container factory successfully Jun 26 07:15:38.723046 kubelet[2555]: I0626 07:15:38.718929 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 26 07:15:38.730830 kubelet[2555]: I0626 07:15:38.730466 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 26 07:15:38.730830 kubelet[2555]: I0626 07:15:38.730534 2555 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 26 07:15:38.730830 kubelet[2555]: I0626 07:15:38.730575 2555 kubelet.go:2329] "Starting kubelet main sync loop" Jun 26 07:15:38.730830 kubelet[2555]: E0626 07:15:38.730664 2555 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 26 07:15:38.732045 kubelet[2555]: E0626 07:15:38.720094 2555 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 26 07:15:38.775605 kubelet[2555]: I0626 07:15:38.775065 2555 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:38.813557 kubelet[2555]: I0626 07:15:38.813472 2555 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:38.819906 kubelet[2555]: I0626 07:15:38.813599 2555 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:38.832092 kubelet[2555]: E0626 07:15:38.831407 2555 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 26 07:15:38.918131 kubelet[2555]: I0626 07:15:38.913847 2555 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 26 07:15:38.918131 kubelet[2555]: I0626 07:15:38.913918 2555 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 26 07:15:38.918131 kubelet[2555]: I0626 07:15:38.913952 2555 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:15:38.918131 kubelet[2555]: I0626 07:15:38.916185 2555 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 26 07:15:38.920579 kubelet[2555]: I0626 07:15:38.918557 2555 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 26 07:15:38.920579 kubelet[2555]: I0626 07:15:38.918597 2555 policy_none.go:49] "None policy: Start" Jun 26 07:15:38.929864 kubelet[2555]: I0626 07:15:38.929820 2555 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 26 07:15:38.930130 kubelet[2555]: I0626 07:15:38.930110 2555 state_mem.go:35] "Initializing new in-memory state store" Jun 26 07:15:38.930968 kubelet[2555]: I0626 07:15:38.930939 2555 state_mem.go:75] "Updated machine memory state" Jun 26 07:15:38.964453 kubelet[2555]: I0626 07:15:38.964415 2555 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" Jun 26 07:15:38.966025 kubelet[2555]: I0626 07:15:38.965981 2555 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 26 07:15:39.032764 kubelet[2555]: I0626 07:15:39.031845 2555 topology_manager.go:215] "Topology Admit Handler" podUID="4557f76b6b2c30597fc978e9efdd1fb6" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.033826 kubelet[2555]: I0626 07:15:39.033251 2555 topology_manager.go:215] "Topology Admit Handler" podUID="5ccfd86310337b6c59208c791967868b" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.033826 kubelet[2555]: I0626 07:15:39.033339 2555 topology_manager.go:215] "Topology Admit Handler" podUID="255e5c9a922e99867b851181ddfe8187" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.064161 kubelet[2555]: W0626 07:15:39.063967 2555 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:15:39.078184 kubelet[2555]: W0626 07:15:39.077926 2555 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:15:39.078998 kubelet[2555]: W0626 07:15:39.078966 2555 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:15:39.088888 kubelet[2555]: I0626 07:15:39.088827 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4557f76b6b2c30597fc978e9efdd1fb6-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"4557f76b6b2c30597fc978e9efdd1fb6\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.089719 kubelet[2555]: I0626 07:15:39.089391 2555 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4557f76b6b2c30597fc978e9efdd1fb6-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"4557f76b6b2c30597fc978e9efdd1fb6\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.089719 kubelet[2555]: I0626 07:15:39.089496 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4557f76b6b2c30597fc978e9efdd1fb6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"4557f76b6b2c30597fc978e9efdd1fb6\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.089719 kubelet[2555]: I0626 07:15:39.089591 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.089719 kubelet[2555]: I0626 07:15:39.089658 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.089719 kubelet[2555]: I0626 07:15:39.089692 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/255e5c9a922e99867b851181ddfe8187-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-0-ebda1d1a0c\" (UID: 
\"255e5c9a922e99867b851181ddfe8187\") " pod="kube-system/kube-scheduler-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.090447 kubelet[2555]: I0626 07:15:39.090174 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.090447 kubelet[2555]: I0626 07:15:39.090318 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.090447 kubelet[2555]: I0626 07:15:39.090394 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ccfd86310337b6c59208c791967868b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c\" (UID: \"5ccfd86310337b6c59208c791967868b\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:15:39.380261 kubelet[2555]: E0626 07:15:39.380165 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:39.382927 kubelet[2555]: E0626 07:15:39.382857 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:39.383936 kubelet[2555]: E0626 07:15:39.383875 2555 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:39.563632 update_engine[1450]: I0626 07:15:39.562454 1450 update_attempter.cc:509] Updating boot flags... Jun 26 07:15:39.602849 kubelet[2555]: I0626 07:15:39.602450 2555 apiserver.go:52] "Watching apiserver" Jun 26 07:15:39.689440 kubelet[2555]: I0626 07:15:39.687540 2555 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 26 07:15:39.701477 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2602) Jun 26 07:15:39.835461 kubelet[2555]: E0626 07:15:39.835388 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:39.855786 kubelet[2555]: E0626 07:15:39.850111 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:39.855786 kubelet[2555]: E0626 07:15:39.853001 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:39.857092 kubelet[2555]: I0626 07:15:39.856994 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-0-ebda1d1a0c" podStartSLOduration=0.856848398 podStartE2EDuration="856.848398ms" podCreationTimestamp="2024-06-26 07:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:15:39.817629018 +0000 UTC m=+1.419666880" watchObservedRunningTime="2024-06-26 07:15:39.856848398 +0000 UTC 
m=+1.458886313" Jun 26 07:15:39.893695 kubelet[2555]: I0626 07:15:39.893634 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-0-ebda1d1a0c" podStartSLOduration=0.89355277 podStartE2EDuration="893.55277ms" podCreationTimestamp="2024-06-26 07:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:15:39.890169918 +0000 UTC m=+1.492207780" watchObservedRunningTime="2024-06-26 07:15:39.89355277 +0000 UTC m=+1.495590628" Jun 26 07:15:39.900786 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2600) Jun 26 07:15:40.123870 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2600) Jun 26 07:15:40.176290 kubelet[2555]: I0626 07:15:40.175499 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-0-ebda1d1a0c" podStartSLOduration=1.175450825 podStartE2EDuration="1.175450825s" podCreationTimestamp="2024-06-26 07:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:15:40.068923714 +0000 UTC m=+1.670961577" watchObservedRunningTime="2024-06-26 07:15:40.175450825 +0000 UTC m=+1.777488684" Jun 26 07:15:40.828172 kubelet[2555]: E0626 07:15:40.827926 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:44.897190 kubelet[2555]: E0626 07:15:44.896784 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:45.753487 sudo[1669]: pam_unix(sudo:session): session closed for 
user root Jun 26 07:15:45.759385 sshd[1666]: pam_unix(sshd:session): session closed for user core Jun 26 07:15:45.763867 systemd[1]: sshd@8-165.232.133.181:22-147.75.109.163:54728.service: Deactivated successfully. Jun 26 07:15:45.767702 systemd[1]: session-9.scope: Deactivated successfully. Jun 26 07:15:45.768276 systemd[1]: session-9.scope: Consumed 5.742s CPU time, 135.9M memory peak, 0B memory swap peak. Jun 26 07:15:45.770700 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jun 26 07:15:45.772561 systemd-logind[1448]: Removed session 9. Jun 26 07:15:45.839542 kubelet[2555]: E0626 07:15:45.839457 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:47.952524 kubelet[2555]: E0626 07:15:47.951217 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:48.200188 kubelet[2555]: E0626 07:15:48.197948 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:48.845978 kubelet[2555]: E0626 07:15:48.845312 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:48.845978 kubelet[2555]: E0626 07:15:48.845724 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:51.811674 kubelet[2555]: I0626 07:15:51.811634 2555 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 26 
07:15:51.813693 containerd[1464]: time="2024-06-26T07:15:51.813645462Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 26 07:15:51.815837 kubelet[2555]: I0626 07:15:51.813892 2555 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 26 07:15:51.891187 kubelet[2555]: I0626 07:15:51.887763 2555 topology_manager.go:215] "Topology Admit Handler" podUID="86db3ec7-203c-448e-99be-8d34b8457e9c" podNamespace="kube-system" podName="kube-proxy-km8c8" Jun 26 07:15:51.901261 systemd[1]: Created slice kubepods-besteffort-pod86db3ec7_203c_448e_99be_8d34b8457e9c.slice - libcontainer container kubepods-besteffort-pod86db3ec7_203c_448e_99be_8d34b8457e9c.slice. Jun 26 07:15:51.967863 kubelet[2555]: I0626 07:15:51.967815 2555 topology_manager.go:215] "Topology Admit Handler" podUID="b5aca86f-9d78-47a3-af3f-4bd8068d3213" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-6t94c" Jun 26 07:15:51.980705 systemd[1]: Created slice kubepods-besteffort-podb5aca86f_9d78_47a3_af3f_4bd8068d3213.slice - libcontainer container kubepods-besteffort-podb5aca86f_9d78_47a3_af3f_4bd8068d3213.slice. 
Jun 26 07:15:51.992730 kubelet[2555]: I0626 07:15:51.992679 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86db3ec7-203c-448e-99be-8d34b8457e9c-xtables-lock\") pod \"kube-proxy-km8c8\" (UID: \"86db3ec7-203c-448e-99be-8d34b8457e9c\") " pod="kube-system/kube-proxy-km8c8" Jun 26 07:15:51.992730 kubelet[2555]: I0626 07:15:51.992738 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86db3ec7-203c-448e-99be-8d34b8457e9c-lib-modules\") pod \"kube-proxy-km8c8\" (UID: \"86db3ec7-203c-448e-99be-8d34b8457e9c\") " pod="kube-system/kube-proxy-km8c8" Jun 26 07:15:51.992976 kubelet[2555]: I0626 07:15:51.992787 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx782\" (UniqueName: \"kubernetes.io/projected/86db3ec7-203c-448e-99be-8d34b8457e9c-kube-api-access-zx782\") pod \"kube-proxy-km8c8\" (UID: \"86db3ec7-203c-448e-99be-8d34b8457e9c\") " pod="kube-system/kube-proxy-km8c8" Jun 26 07:15:51.992976 kubelet[2555]: I0626 07:15:51.992812 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86db3ec7-203c-448e-99be-8d34b8457e9c-kube-proxy\") pod \"kube-proxy-km8c8\" (UID: \"86db3ec7-203c-448e-99be-8d34b8457e9c\") " pod="kube-system/kube-proxy-km8c8" Jun 26 07:15:52.093965 kubelet[2555]: I0626 07:15:52.093413 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5aca86f-9d78-47a3-af3f-4bd8068d3213-var-lib-calico\") pod \"tigera-operator-76c4974c85-6t94c\" (UID: \"b5aca86f-9d78-47a3-af3f-4bd8068d3213\") " pod="tigera-operator/tigera-operator-76c4974c85-6t94c" Jun 26 07:15:52.093965 kubelet[2555]: I0626 
07:15:52.093507 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmrm\" (UniqueName: \"kubernetes.io/projected/b5aca86f-9d78-47a3-af3f-4bd8068d3213-kube-api-access-hvmrm\") pod \"tigera-operator-76c4974c85-6t94c\" (UID: \"b5aca86f-9d78-47a3-af3f-4bd8068d3213\") " pod="tigera-operator/tigera-operator-76c4974c85-6t94c" Jun 26 07:15:52.212585 kubelet[2555]: E0626 07:15:52.210281 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:52.218241 containerd[1464]: time="2024-06-26T07:15:52.217216690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-km8c8,Uid:86db3ec7-203c-448e-99be-8d34b8457e9c,Namespace:kube-system,Attempt:0,}" Jun 26 07:15:52.293131 containerd[1464]: time="2024-06-26T07:15:52.293078573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-6t94c,Uid:b5aca86f-9d78-47a3-af3f-4bd8068d3213,Namespace:tigera-operator,Attempt:0,}" Jun 26 07:15:52.303054 containerd[1464]: time="2024-06-26T07:15:52.302795096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:15:52.303054 containerd[1464]: time="2024-06-26T07:15:52.302904052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:52.303806 containerd[1464]: time="2024-06-26T07:15:52.303690248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:15:52.303806 containerd[1464]: time="2024-06-26T07:15:52.303765058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:52.377865 systemd[1]: Started cri-containerd-50ac01719bd1ceb33fbb415baeccd39a8887da4a808e21cfa10857648bec2c49.scope - libcontainer container 50ac01719bd1ceb33fbb415baeccd39a8887da4a808e21cfa10857648bec2c49. Jun 26 07:15:52.399419 containerd[1464]: time="2024-06-26T07:15:52.399280954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:15:52.399419 containerd[1464]: time="2024-06-26T07:15:52.399357620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:52.400232 containerd[1464]: time="2024-06-26T07:15:52.399387488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:15:52.400581 containerd[1464]: time="2024-06-26T07:15:52.400524486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:52.434073 systemd[1]: Started cri-containerd-3b78106cb2c47209dfd55e0487c1c2ef384eb3ea13a79db06bc5df400854eee8.scope - libcontainer container 3b78106cb2c47209dfd55e0487c1c2ef384eb3ea13a79db06bc5df400854eee8. 
Jun 26 07:15:52.446836 containerd[1464]: time="2024-06-26T07:15:52.446728821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-km8c8,Uid:86db3ec7-203c-448e-99be-8d34b8457e9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"50ac01719bd1ceb33fbb415baeccd39a8887da4a808e21cfa10857648bec2c49\"" Jun 26 07:15:52.451094 kubelet[2555]: E0626 07:15:52.451038 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:52.458049 containerd[1464]: time="2024-06-26T07:15:52.457904306Z" level=info msg="CreateContainer within sandbox \"50ac01719bd1ceb33fbb415baeccd39a8887da4a808e21cfa10857648bec2c49\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 26 07:15:52.482036 containerd[1464]: time="2024-06-26T07:15:52.480866756Z" level=info msg="CreateContainer within sandbox \"50ac01719bd1ceb33fbb415baeccd39a8887da4a808e21cfa10857648bec2c49\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dfd0b993cac84423ab14c5b0e4218d1ed0b121efe03b0369326ce8d8bf21241a\"" Jun 26 07:15:52.482944 containerd[1464]: time="2024-06-26T07:15:52.482328635Z" level=info msg="StartContainer for \"dfd0b993cac84423ab14c5b0e4218d1ed0b121efe03b0369326ce8d8bf21241a\"" Jun 26 07:15:52.512283 containerd[1464]: time="2024-06-26T07:15:52.511584960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-6t94c,Uid:b5aca86f-9d78-47a3-af3f-4bd8068d3213,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3b78106cb2c47209dfd55e0487c1c2ef384eb3ea13a79db06bc5df400854eee8\"" Jun 26 07:15:52.515035 containerd[1464]: time="2024-06-26T07:15:52.514521026Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 26 07:15:52.539076 systemd[1]: Started cri-containerd-dfd0b993cac84423ab14c5b0e4218d1ed0b121efe03b0369326ce8d8bf21241a.scope - libcontainer container 
dfd0b993cac84423ab14c5b0e4218d1ed0b121efe03b0369326ce8d8bf21241a. Jun 26 07:15:52.579899 containerd[1464]: time="2024-06-26T07:15:52.579838350Z" level=info msg="StartContainer for \"dfd0b993cac84423ab14c5b0e4218d1ed0b121efe03b0369326ce8d8bf21241a\" returns successfully" Jun 26 07:15:52.857131 kubelet[2555]: E0626 07:15:52.857079 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:53.874958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438694088.mount: Deactivated successfully. Jun 26 07:15:54.453824 containerd[1464]: time="2024-06-26T07:15:54.453382614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:54.457211 containerd[1464]: time="2024-06-26T07:15:54.456928278Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076040" Jun 26 07:15:54.459785 containerd[1464]: time="2024-06-26T07:15:54.457988628Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:54.468920 containerd[1464]: time="2024-06-26T07:15:54.468855877Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:15:54.471630 containerd[1464]: time="2024-06-26T07:15:54.471554279Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 
1.956985728s" Jun 26 07:15:54.471998 containerd[1464]: time="2024-06-26T07:15:54.471960242Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 26 07:15:54.501785 containerd[1464]: time="2024-06-26T07:15:54.501434118Z" level=info msg="CreateContainer within sandbox \"3b78106cb2c47209dfd55e0487c1c2ef384eb3ea13a79db06bc5df400854eee8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 26 07:15:54.519153 containerd[1464]: time="2024-06-26T07:15:54.519004203Z" level=info msg="CreateContainer within sandbox \"3b78106cb2c47209dfd55e0487c1c2ef384eb3ea13a79db06bc5df400854eee8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d33ead41eae5150388d76677456592bd023e23baf849c14be7a49bba68e53013\"" Jun 26 07:15:54.521004 containerd[1464]: time="2024-06-26T07:15:54.520077563Z" level=info msg="StartContainer for \"d33ead41eae5150388d76677456592bd023e23baf849c14be7a49bba68e53013\"" Jun 26 07:15:54.558175 systemd[1]: run-containerd-runc-k8s.io-d33ead41eae5150388d76677456592bd023e23baf849c14be7a49bba68e53013-runc.GOMDXO.mount: Deactivated successfully. Jun 26 07:15:54.565229 systemd[1]: Started cri-containerd-d33ead41eae5150388d76677456592bd023e23baf849c14be7a49bba68e53013.scope - libcontainer container d33ead41eae5150388d76677456592bd023e23baf849c14be7a49bba68e53013. 
Jun 26 07:15:54.609591 containerd[1464]: time="2024-06-26T07:15:54.609537650Z" level=info msg="StartContainer for \"d33ead41eae5150388d76677456592bd023e23baf849c14be7a49bba68e53013\" returns successfully" Jun 26 07:15:54.881358 kubelet[2555]: I0626 07:15:54.880727 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-km8c8" podStartSLOduration=3.880683943 podStartE2EDuration="3.880683943s" podCreationTimestamp="2024-06-26 07:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:15:52.874379174 +0000 UTC m=+14.476417033" watchObservedRunningTime="2024-06-26 07:15:54.880683943 +0000 UTC m=+16.482721801" Jun 26 07:15:54.881358 kubelet[2555]: I0626 07:15:54.880895 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-6t94c" podStartSLOduration=1.91470043 podStartE2EDuration="3.880871198s" podCreationTimestamp="2024-06-26 07:15:51 +0000 UTC" firstStartedPulling="2024-06-26 07:15:52.513557654 +0000 UTC m=+14.115595508" lastFinishedPulling="2024-06-26 07:15:54.479728432 +0000 UTC m=+16.081766276" observedRunningTime="2024-06-26 07:15:54.88055034 +0000 UTC m=+16.482588198" watchObservedRunningTime="2024-06-26 07:15:54.880871198 +0000 UTC m=+16.482909056" Jun 26 07:15:57.780838 kubelet[2555]: I0626 07:15:57.780797 2555 topology_manager.go:215] "Topology Admit Handler" podUID="008f8b3e-1803-451e-bcae-4d6a41f8f2ef" podNamespace="calico-system" podName="calico-typha-59c845bdc-967vp" Jun 26 07:15:57.792111 systemd[1]: Created slice kubepods-besteffort-pod008f8b3e_1803_451e_bcae_4d6a41f8f2ef.slice - libcontainer container kubepods-besteffort-pod008f8b3e_1803_451e_bcae_4d6a41f8f2ef.slice. 
Jun 26 07:15:57.890530 kubelet[2555]: I0626 07:15:57.890484 2555 topology_manager.go:215] "Topology Admit Handler" podUID="a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" podNamespace="calico-system" podName="calico-node-p4gd6" Jun 26 07:15:57.901465 systemd[1]: Created slice kubepods-besteffort-poda2a1620e_2c1c_4dc1_8a77_1b45a847ae6b.slice - libcontainer container kubepods-besteffort-poda2a1620e_2c1c_4dc1_8a77_1b45a847ae6b.slice. Jun 26 07:15:57.945961 kubelet[2555]: I0626 07:15:57.945911 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-var-lib-calico\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.945961 kubelet[2555]: I0626 07:15:57.945965 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-lib-modules\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.946173 kubelet[2555]: I0626 07:15:57.945988 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-bin-dir\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.946173 kubelet[2555]: I0626 07:15:57.946008 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-flexvol-driver-host\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.946173 
kubelet[2555]: I0626 07:15:57.946031 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-tigera-ca-bundle\") pod \"calico-typha-59c845bdc-967vp\" (UID: \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\") " pod="calico-system/calico-typha-59c845bdc-967vp" Jun 26 07:15:57.946173 kubelet[2555]: I0626 07:15:57.946052 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-net-dir\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.946173 kubelet[2555]: I0626 07:15:57.946069 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-log-dir\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.947651 kubelet[2555]: I0626 07:15:57.947563 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-policysync\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.947651 kubelet[2555]: I0626 07:15:57.947664 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-node-certs\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.948368 kubelet[2555]: I0626 07:15:57.947689 2555 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-995gw\" (UniqueName: \"kubernetes.io/projected/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-kube-api-access-995gw\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.948368 kubelet[2555]: I0626 07:15:57.947939 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-typha-certs\") pod \"calico-typha-59c845bdc-967vp\" (UID: \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\") " pod="calico-system/calico-typha-59c845bdc-967vp" Jun 26 07:15:57.948368 kubelet[2555]: I0626 07:15:57.947992 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz8vd\" (UniqueName: \"kubernetes.io/projected/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-kube-api-access-cz8vd\") pod \"calico-typha-59c845bdc-967vp\" (UID: \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\") " pod="calico-system/calico-typha-59c845bdc-967vp" Jun 26 07:15:57.948368 kubelet[2555]: I0626 07:15:57.948023 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-tigera-ca-bundle\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.948368 kubelet[2555]: I0626 07:15:57.948085 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-xtables-lock\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:57.948616 kubelet[2555]: I0626 07:15:57.948137 2555 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-var-run-calico\") pod \"calico-node-p4gd6\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " pod="calico-system/calico-node-p4gd6" Jun 26 07:15:58.085859 kubelet[2555]: E0626 07:15:58.081955 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.085859 kubelet[2555]: W0626 07:15:58.081991 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.085859 kubelet[2555]: E0626 07:15:58.082036 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.091015 kubelet[2555]: E0626 07:15:58.090975 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.091301 kubelet[2555]: W0626 07:15:58.091204 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.092709 kubelet[2555]: E0626 07:15:58.092673 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.096269 kubelet[2555]: E0626 07:15:58.096219 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.096269 kubelet[2555]: W0626 07:15:58.096248 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.096269 kubelet[2555]: E0626 07:15:58.096276 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.098099 kubelet[2555]: I0626 07:15:58.098057 2555 topology_manager.go:215] "Topology Admit Handler" podUID="57e08add-fe69-4f05-8bca-834c135d01cc" podNamespace="calico-system" podName="csi-node-driver-whchj" Jun 26 07:15:58.100687 kubelet[2555]: E0626 07:15:58.100612 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc" Jun 26 07:15:58.151923 kubelet[2555]: E0626 07:15:58.151875 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.151923 kubelet[2555]: W0626 07:15:58.151908 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.170647 kubelet[2555]: E0626 07:15:58.151943 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.170647 kubelet[2555]: E0626 07:15:58.157479 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.170647 kubelet[2555]: W0626 07:15:58.157508 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.170647 kubelet[2555]: E0626 07:15:58.157542 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.170647 kubelet[2555]: E0626 07:15:58.159382 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.170647 kubelet[2555]: W0626 07:15:58.159400 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.170647 kubelet[2555]: E0626 07:15:58.159441 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.170647 kubelet[2555]: E0626 07:15:58.162144 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.170647 kubelet[2555]: W0626 07:15:58.162165 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.170647 kubelet[2555]: E0626 07:15:58.162195 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.172588 kubelet[2555]: E0626 07:15:58.162594 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.172588 kubelet[2555]: W0626 07:15:58.162611 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.172588 kubelet[2555]: E0626 07:15:58.162633 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.172588 kubelet[2555]: E0626 07:15:58.163117 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.172588 kubelet[2555]: W0626 07:15:58.163132 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.172588 kubelet[2555]: E0626 07:15:58.163180 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.172588 kubelet[2555]: E0626 07:15:58.168402 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.172588 kubelet[2555]: W0626 07:15:58.168429 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.172588 kubelet[2555]: E0626 07:15:58.168967 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.172588 kubelet[2555]: E0626 07:15:58.170657 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.173058 kubelet[2555]: W0626 07:15:58.170679 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.173058 kubelet[2555]: E0626 07:15:58.170780 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.173058 kubelet[2555]: E0626 07:15:58.172982 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.173058 kubelet[2555]: W0626 07:15:58.173007 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.173058 kubelet[2555]: E0626 07:15:58.173039 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.173384 kubelet[2555]: E0626 07:15:58.173313 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.173384 kubelet[2555]: W0626 07:15:58.173324 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.173384 kubelet[2555]: E0626 07:15:58.173341 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.176965 kubelet[2555]: E0626 07:15:58.175881 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.176965 kubelet[2555]: W0626 07:15:58.175921 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.176965 kubelet[2555]: E0626 07:15:58.175954 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.176965 kubelet[2555]: E0626 07:15:58.176240 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.176965 kubelet[2555]: W0626 07:15:58.176251 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.176965 kubelet[2555]: E0626 07:15:58.176265 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.176965 kubelet[2555]: E0626 07:15:58.176899 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.176965 kubelet[2555]: W0626 07:15:58.176921 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.176965 kubelet[2555]: E0626 07:15:58.176950 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.180646 kubelet[2555]: E0626 07:15:58.177975 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.180646 kubelet[2555]: W0626 07:15:58.177994 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.180646 kubelet[2555]: E0626 07:15:58.178013 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.180646 kubelet[2555]: E0626 07:15:58.178222 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.180646 kubelet[2555]: W0626 07:15:58.178229 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.180646 kubelet[2555]: E0626 07:15:58.178241 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.180646 kubelet[2555]: E0626 07:15:58.178488 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.180646 kubelet[2555]: W0626 07:15:58.178498 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.180646 kubelet[2555]: E0626 07:15:58.178510 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.180646 kubelet[2555]: E0626 07:15:58.178680 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.181070 kubelet[2555]: W0626 07:15:58.178692 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.181070 kubelet[2555]: E0626 07:15:58.178707 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.181070 kubelet[2555]: E0626 07:15:58.179936 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.181070 kubelet[2555]: W0626 07:15:58.179956 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.181070 kubelet[2555]: E0626 07:15:58.179977 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.181070 kubelet[2555]: E0626 07:15:58.180230 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.181070 kubelet[2555]: W0626 07:15:58.180239 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.181070 kubelet[2555]: E0626 07:15:58.180251 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.181070 kubelet[2555]: E0626 07:15:58.180530 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.181070 kubelet[2555]: W0626 07:15:58.180541 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.181367 kubelet[2555]: E0626 07:15:58.180555 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.181367 kubelet[2555]: E0626 07:15:58.180779 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.181367 kubelet[2555]: W0626 07:15:58.180787 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.181367 kubelet[2555]: E0626 07:15:58.180799 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.181367 kubelet[2555]: E0626 07:15:58.180955 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.181367 kubelet[2555]: W0626 07:15:58.180962 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.181367 kubelet[2555]: E0626 07:15:58.180972 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.181562 kubelet[2555]: E0626 07:15:58.181359 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.181562 kubelet[2555]: W0626 07:15:58.181419 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.181562 kubelet[2555]: E0626 07:15:58.181433 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.181562 kubelet[2555]: I0626 07:15:58.181464 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57e08add-fe69-4f05-8bca-834c135d01cc-kubelet-dir\") pod \"csi-node-driver-whchj\" (UID: \"57e08add-fe69-4f05-8bca-834c135d01cc\") " pod="calico-system/csi-node-driver-whchj" Jun 26 07:15:58.182672 kubelet[2555]: E0626 07:15:58.181914 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.182672 kubelet[2555]: W0626 07:15:58.181932 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.182672 kubelet[2555]: E0626 07:15:58.181947 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.182672 kubelet[2555]: I0626 07:15:58.181972 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/57e08add-fe69-4f05-8bca-834c135d01cc-socket-dir\") pod \"csi-node-driver-whchj\" (UID: \"57e08add-fe69-4f05-8bca-834c135d01cc\") " pod="calico-system/csi-node-driver-whchj" Jun 26 07:15:58.182672 kubelet[2555]: E0626 07:15:58.182313 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.182672 kubelet[2555]: W0626 07:15:58.182326 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.182672 kubelet[2555]: E0626 07:15:58.182340 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.182672 kubelet[2555]: I0626 07:15:58.182363 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/57e08add-fe69-4f05-8bca-834c135d01cc-registration-dir\") pod \"csi-node-driver-whchj\" (UID: \"57e08add-fe69-4f05-8bca-834c135d01cc\") " pod="calico-system/csi-node-driver-whchj" Jun 26 07:15:58.183665 kubelet[2555]: E0626 07:15:58.183243 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.183665 kubelet[2555]: W0626 07:15:58.183259 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.183665 kubelet[2555]: E0626 07:15:58.183284 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.183665 kubelet[2555]: I0626 07:15:58.183310 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/57e08add-fe69-4f05-8bca-834c135d01cc-varrun\") pod \"csi-node-driver-whchj\" (UID: \"57e08add-fe69-4f05-8bca-834c135d01cc\") " pod="calico-system/csi-node-driver-whchj" Jun 26 07:15:58.183665 kubelet[2555]: E0626 07:15:58.183503 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.183665 kubelet[2555]: W0626 07:15:58.183511 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.183665 kubelet[2555]: E0626 07:15:58.183525 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.183665 kubelet[2555]: I0626 07:15:58.183545 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9f7m\" (UniqueName: \"kubernetes.io/projected/57e08add-fe69-4f05-8bca-834c135d01cc-kube-api-access-d9f7m\") pod \"csi-node-driver-whchj\" (UID: \"57e08add-fe69-4f05-8bca-834c135d01cc\") " pod="calico-system/csi-node-driver-whchj" Jun 26 07:15:58.184020 kubelet[2555]: E0626 07:15:58.183735 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.184020 kubelet[2555]: W0626 07:15:58.183755 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.184020 kubelet[2555]: E0626 07:15:58.183776 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.184681 kubelet[2555]: E0626 07:15:58.184658 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.184681 kubelet[2555]: W0626 07:15:58.184678 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.188531 kubelet[2555]: E0626 07:15:58.184909 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.188531 kubelet[2555]: E0626 07:15:58.185149 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.188531 kubelet[2555]: W0626 07:15:58.185162 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.188531 kubelet[2555]: E0626 07:15:58.185210 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.188531 kubelet[2555]: E0626 07:15:58.185872 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.188531 kubelet[2555]: W0626 07:15:58.185891 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.188531 kubelet[2555]: E0626 07:15:58.185978 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.188531 kubelet[2555]: E0626 07:15:58.186851 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.188531 kubelet[2555]: W0626 07:15:58.187090 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.188531 kubelet[2555]: E0626 07:15:58.187156 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.188951 kubelet[2555]: E0626 07:15:58.188236 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.188951 kubelet[2555]: W0626 07:15:58.188253 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.189010 kubelet[2555]: E0626 07:15:58.188980 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.189010 kubelet[2555]: W0626 07:15:58.188994 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.189065 kubelet[2555]: E0626 07:15:58.189012 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.189476 kubelet[2555]: E0626 07:15:58.189413 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.189908 kubelet[2555]: E0626 07:15:58.189885 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.189972 kubelet[2555]: W0626 07:15:58.189909 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.189972 kubelet[2555]: E0626 07:15:58.189930 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.190345 kubelet[2555]: E0626 07:15:58.190327 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.190345 kubelet[2555]: W0626 07:15:58.190344 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.190589 kubelet[2555]: E0626 07:15:58.190362 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.192354 kubelet[2555]: E0626 07:15:58.192306 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.192354 kubelet[2555]: W0626 07:15:58.192333 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.192354 kubelet[2555]: E0626 07:15:58.192356 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.209781 kubelet[2555]: E0626 07:15:58.206949 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:58.210024 containerd[1464]: time="2024-06-26T07:15:58.208166729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4gd6,Uid:a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b,Namespace:calico-system,Attempt:0,}" Jun 26 07:15:58.268630 containerd[1464]: time="2024-06-26T07:15:58.267078754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:15:58.268630 containerd[1464]: time="2024-06-26T07:15:58.267342524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:58.268630 containerd[1464]: time="2024-06-26T07:15:58.267407263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:15:58.268630 containerd[1464]: time="2024-06-26T07:15:58.267482483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:58.285851 kubelet[2555]: E0626 07:15:58.285622 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.285851 kubelet[2555]: W0626 07:15:58.285654 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.285851 kubelet[2555]: E0626 07:15:58.285682 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.286415 kubelet[2555]: E0626 07:15:58.286181 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.286415 kubelet[2555]: W0626 07:15:58.286259 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.286415 kubelet[2555]: E0626 07:15:58.286288 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.287062 kubelet[2555]: E0626 07:15:58.286888 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.287062 kubelet[2555]: W0626 07:15:58.286904 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.287062 kubelet[2555]: E0626 07:15:58.286931 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.287506 kubelet[2555]: E0626 07:15:58.287488 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.287622 kubelet[2555]: W0626 07:15:58.287605 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.287935 kubelet[2555]: E0626 07:15:58.287903 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.288651 kubelet[2555]: E0626 07:15:58.288410 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.288651 kubelet[2555]: W0626 07:15:58.288428 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.288651 kubelet[2555]: E0626 07:15:58.288577 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.289142 kubelet[2555]: E0626 07:15:58.289125 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.289873 kubelet[2555]: W0626 07:15:58.289815 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.290230 kubelet[2555]: E0626 07:15:58.290006 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.291312 kubelet[2555]: E0626 07:15:58.291172 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.291312 kubelet[2555]: W0626 07:15:58.291190 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.291312 kubelet[2555]: E0626 07:15:58.291237 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.292814 kubelet[2555]: E0626 07:15:58.292651 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.292814 kubelet[2555]: W0626 07:15:58.292672 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.292814 kubelet[2555]: E0626 07:15:58.292728 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.293186 kubelet[2555]: E0626 07:15:58.293074 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.293186 kubelet[2555]: W0626 07:15:58.293088 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.293186 kubelet[2555]: E0626 07:15:58.293125 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.293439 kubelet[2555]: E0626 07:15:58.293351 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.293439 kubelet[2555]: W0626 07:15:58.293363 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.293842 kubelet[2555]: E0626 07:15:58.293605 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.293842 kubelet[2555]: E0626 07:15:58.293670 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.293842 kubelet[2555]: W0626 07:15:58.293684 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.293842 kubelet[2555]: E0626 07:15:58.293720 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.294085 kubelet[2555]: E0626 07:15:58.294071 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.294135 kubelet[2555]: W0626 07:15:58.294126 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.294309 kubelet[2555]: E0626 07:15:58.294282 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.295309 kubelet[2555]: E0626 07:15:58.295292 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.295524 kubelet[2555]: W0626 07:15:58.295418 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.295524 kubelet[2555]: E0626 07:15:58.295470 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.295894 kubelet[2555]: E0626 07:15:58.295879 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.296129 kubelet[2555]: W0626 07:15:58.296027 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.296129 kubelet[2555]: E0626 07:15:58.296102 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.297002 kubelet[2555]: E0626 07:15:58.296859 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.297002 kubelet[2555]: W0626 07:15:58.296885 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.297465 kubelet[2555]: E0626 07:15:58.297322 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.297465 kubelet[2555]: W0626 07:15:58.297341 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.297909 kubelet[2555]: E0626 07:15:58.297834 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.299163 kubelet[2555]: E0626 07:15:58.298040 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.299163 kubelet[2555]: W0626 07:15:58.298055 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.299163 kubelet[2555]: E0626 07:15:58.298065 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.301094 kubelet[2555]: E0626 07:15:58.300851 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.303345 kubelet[2555]: E0626 07:15:58.302122 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.303345 kubelet[2555]: W0626 07:15:58.302157 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.305245 kubelet[2555]: E0626 07:15:58.303792 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.305245 kubelet[2555]: E0626 07:15:58.305000 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.305245 kubelet[2555]: W0626 07:15:58.305033 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.305112 systemd[1]: Started cri-containerd-94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330.scope - libcontainer container 94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330. Jun 26 07:15:58.308800 kubelet[2555]: E0626 07:15:58.305801 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.308800 kubelet[2555]: E0626 07:15:58.307164 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.308800 kubelet[2555]: W0626 07:15:58.307180 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.308800 kubelet[2555]: E0626 07:15:58.307233 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.310304 kubelet[2555]: E0626 07:15:58.309885 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.310304 kubelet[2555]: W0626 07:15:58.309918 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.310304 kubelet[2555]: E0626 07:15:58.310002 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.311827 kubelet[2555]: E0626 07:15:58.311802 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.312225 kubelet[2555]: W0626 07:15:58.311898 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.312225 kubelet[2555]: E0626 07:15:58.311964 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.314945 kubelet[2555]: E0626 07:15:58.314724 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.314945 kubelet[2555]: W0626 07:15:58.314769 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.314945 kubelet[2555]: E0626 07:15:58.314824 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.315346 kubelet[2555]: E0626 07:15:58.315280 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.316492 kubelet[2555]: W0626 07:15:58.316456 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.316684 kubelet[2555]: E0626 07:15:58.316635 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.317051 kubelet[2555]: E0626 07:15:58.317030 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.317051 kubelet[2555]: W0626 07:15:58.317050 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.317135 kubelet[2555]: E0626 07:15:58.317073 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 26 07:15:58.332103 kubelet[2555]: E0626 07:15:58.332050 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 26 07:15:58.332103 kubelet[2555]: W0626 07:15:58.332094 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 26 07:15:58.332341 kubelet[2555]: E0626 07:15:58.332132 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 26 07:15:58.364942 containerd[1464]: time="2024-06-26T07:15:58.363851105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4gd6,Uid:a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\"" Jun 26 07:15:58.365322 kubelet[2555]: E0626 07:15:58.365292 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:58.367269 containerd[1464]: time="2024-06-26T07:15:58.367230731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 26 07:15:58.403839 kubelet[2555]: E0626 07:15:58.403792 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:58.405409 containerd[1464]: time="2024-06-26T07:15:58.405294943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59c845bdc-967vp,Uid:008f8b3e-1803-451e-bcae-4d6a41f8f2ef,Namespace:calico-system,Attempt:0,}" Jun 26 07:15:58.452771 containerd[1464]: 
time="2024-06-26T07:15:58.450852059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:15:58.452771 containerd[1464]: time="2024-06-26T07:15:58.450918976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:58.452771 containerd[1464]: time="2024-06-26T07:15:58.450939064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:15:58.452771 containerd[1464]: time="2024-06-26T07:15:58.450948637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:15:58.478097 systemd[1]: Started cri-containerd-088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988.scope - libcontainer container 088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988. 
Jun 26 07:15:58.556255 containerd[1464]: time="2024-06-26T07:15:58.556198766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59c845bdc-967vp,Uid:008f8b3e-1803-451e-bcae-4d6a41f8f2ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\"" Jun 26 07:15:58.557812 kubelet[2555]: E0626 07:15:58.557727 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:15:59.732980 kubelet[2555]: E0626 07:15:59.731716 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc" Jun 26 07:16:00.164584 containerd[1464]: time="2024-06-26T07:16:00.155962049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:00.164584 containerd[1464]: time="2024-06-26T07:16:00.160393620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 26 07:16:00.164584 containerd[1464]: time="2024-06-26T07:16:00.163330831Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:00.238784 containerd[1464]: time="2024-06-26T07:16:00.238080729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:00.244188 containerd[1464]: 
time="2024-06-26T07:16:00.241040992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.873765088s" Jun 26 07:16:00.244500 containerd[1464]: time="2024-06-26T07:16:00.244463509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 26 07:16:00.253772 containerd[1464]: time="2024-06-26T07:16:00.252969514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 26 07:16:00.259895 containerd[1464]: time="2024-06-26T07:16:00.258478554Z" level=info msg="CreateContainer within sandbox \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 26 07:16:00.299409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887889968.mount: Deactivated successfully. Jun 26 07:16:00.307988 containerd[1464]: time="2024-06-26T07:16:00.307914388Z" level=info msg="CreateContainer within sandbox \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d\"" Jun 26 07:16:00.309870 containerd[1464]: time="2024-06-26T07:16:00.309177246Z" level=info msg="StartContainer for \"b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d\"" Jun 26 07:16:00.425508 systemd[1]: Started cri-containerd-b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d.scope - libcontainer container b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d. 
Jun 26 07:16:00.562940 containerd[1464]: time="2024-06-26T07:16:00.562865814Z" level=info msg="StartContainer for \"b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d\" returns successfully" Jun 26 07:16:00.611952 systemd[1]: cri-containerd-b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d.scope: Deactivated successfully. Jun 26 07:16:00.697645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d-rootfs.mount: Deactivated successfully. Jun 26 07:16:00.714702 containerd[1464]: time="2024-06-26T07:16:00.714339300Z" level=info msg="shim disconnected" id=b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d namespace=k8s.io Jun 26 07:16:00.714702 containerd[1464]: time="2024-06-26T07:16:00.714448217Z" level=warning msg="cleaning up after shim disconnected" id=b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d namespace=k8s.io Jun 26 07:16:00.714702 containerd[1464]: time="2024-06-26T07:16:00.714462702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:16:00.902841 containerd[1464]: time="2024-06-26T07:16:00.901274577Z" level=info msg="StopPodSandbox for \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\"" Jun 26 07:16:00.902841 containerd[1464]: time="2024-06-26T07:16:00.901415468Z" level=info msg="Container to stop \"b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 26 07:16:00.925263 systemd[1]: cri-containerd-94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330.scope: Deactivated successfully. 
Jun 26 07:16:01.024491 containerd[1464]: time="2024-06-26T07:16:01.024004659Z" level=info msg="shim disconnected" id=94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330 namespace=k8s.io Jun 26 07:16:01.024491 containerd[1464]: time="2024-06-26T07:16:01.024100420Z" level=warning msg="cleaning up after shim disconnected" id=94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330 namespace=k8s.io Jun 26 07:16:01.024491 containerd[1464]: time="2024-06-26T07:16:01.024114857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:16:01.074926 containerd[1464]: time="2024-06-26T07:16:01.074620070Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:16:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 26 07:16:01.077998 containerd[1464]: time="2024-06-26T07:16:01.077893642Z" level=info msg="TearDown network for sandbox \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\" successfully" Jun 26 07:16:01.078394 containerd[1464]: time="2024-06-26T07:16:01.078300278Z" level=info msg="StopPodSandbox for \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\" returns successfully" Jun 26 07:16:01.148320 kubelet[2555]: I0626 07:16:01.147505 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.151333 kubelet[2555]: I0626 07:16:01.149203 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-var-lib-calico\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151333 kubelet[2555]: I0626 07:16:01.149320 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-node-certs\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151333 kubelet[2555]: I0626 07:16:01.149378 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-lib-modules\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151333 kubelet[2555]: I0626 07:16:01.149410 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-policysync\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151333 kubelet[2555]: I0626 07:16:01.149468 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-bin-dir\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151333 kubelet[2555]: I0626 07:16:01.149515 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-995gw\" (UniqueName: 
\"kubernetes.io/projected/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-kube-api-access-995gw\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151818 kubelet[2555]: I0626 07:16:01.149549 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-tigera-ca-bundle\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151818 kubelet[2555]: I0626 07:16:01.149577 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-var-run-calico\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151818 kubelet[2555]: I0626 07:16:01.149662 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-flexvol-driver-host\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151818 kubelet[2555]: I0626 07:16:01.149691 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-net-dir\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151818 kubelet[2555]: I0626 07:16:01.149718 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-log-dir\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.151818 kubelet[2555]: I0626 07:16:01.150562 
2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-xtables-lock\") pod \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\" (UID: \"a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b\") " Jun 26 07:16:01.153132 kubelet[2555]: I0626 07:16:01.152435 2555 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-var-lib-calico\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.153132 kubelet[2555]: I0626 07:16:01.152523 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.153132 kubelet[2555]: I0626 07:16:01.152570 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-policysync" (OuterVolumeSpecName: "policysync") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.153132 kubelet[2555]: I0626 07:16:01.152594 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.158532 kubelet[2555]: I0626 07:16:01.158469 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-kube-api-access-995gw" (OuterVolumeSpecName: "kube-api-access-995gw") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "kube-api-access-995gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 26 07:16:01.163736 kubelet[2555]: I0626 07:16:01.163469 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 26 07:16:01.163736 kubelet[2555]: I0626 07:16:01.163585 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-node-certs" (OuterVolumeSpecName: "node-certs") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 26 07:16:01.163736 kubelet[2555]: I0626 07:16:01.163616 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.163736 kubelet[2555]: I0626 07:16:01.149553 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.163736 kubelet[2555]: I0626 07:16:01.163654 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.165912 kubelet[2555]: I0626 07:16:01.163680 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.165912 kubelet[2555]: I0626 07:16:01.163691 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" (UID: "a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:16:01.253400 kubelet[2555]: I0626 07:16:01.253137 2555 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-xtables-lock\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.253400 kubelet[2555]: I0626 07:16:01.253207 2555 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-node-certs\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.253400 kubelet[2555]: I0626 07:16:01.253226 2555 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-lib-modules\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.253400 kubelet[2555]: I0626 07:16:01.253244 2555 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-policysync\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.253400 kubelet[2555]: I0626 07:16:01.253261 2555 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-bin-dir\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.253400 kubelet[2555]: I0626 07:16:01.253281 2555 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-flexvol-driver-host\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.253400 kubelet[2555]: I0626 07:16:01.253298 2555 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-net-dir\") on node 
\"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.253400 kubelet[2555]: I0626 07:16:01.253314 2555 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-cni-log-dir\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.254097 kubelet[2555]: I0626 07:16:01.253330 2555 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-995gw\" (UniqueName: \"kubernetes.io/projected/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-kube-api-access-995gw\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.254097 kubelet[2555]: I0626 07:16:01.253347 2555 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-tigera-ca-bundle\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.254097 kubelet[2555]: I0626 07:16:01.253362 2555 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b-var-run-calico\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\"" Jun 26 07:16:01.289407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330-rootfs.mount: Deactivated successfully. Jun 26 07:16:01.290144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330-shm.mount: Deactivated successfully. Jun 26 07:16:01.290261 systemd[1]: var-lib-kubelet-pods-a2a1620e\x2d2c1c\x2d4dc1\x2d8a77\x2d1b45a847ae6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d995gw.mount: Deactivated successfully. Jun 26 07:16:01.290352 systemd[1]: var-lib-kubelet-pods-a2a1620e\x2d2c1c\x2d4dc1\x2d8a77\x2d1b45a847ae6b-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jun 26 07:16:01.733315 kubelet[2555]: E0626 07:16:01.731025 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc" Jun 26 07:16:01.927825 kubelet[2555]: I0626 07:16:01.916478 2555 scope.go:117] "RemoveContainer" containerID="b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d" Jun 26 07:16:01.960196 containerd[1464]: time="2024-06-26T07:16:01.960134538Z" level=info msg="RemoveContainer for \"b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d\"" Jun 26 07:16:01.976412 systemd[1]: Removed slice kubepods-besteffort-poda2a1620e_2c1c_4dc1_8a77_1b45a847ae6b.slice - libcontainer container kubepods-besteffort-poda2a1620e_2c1c_4dc1_8a77_1b45a847ae6b.slice. Jun 26 07:16:01.998560 containerd[1464]: time="2024-06-26T07:16:01.996169067Z" level=info msg="RemoveContainer for \"b86be6ddf6135c51887dc9752f23b19718aaa5f5afe65c78b3e0f6ec4b0f147d\" returns successfully" Jun 26 07:16:02.154383 kubelet[2555]: I0626 07:16:02.154285 2555 topology_manager.go:215] "Topology Admit Handler" podUID="0efa1a12-c88f-475f-9c2a-14d317ee82e0" podNamespace="calico-system" podName="calico-node-k22jp" Jun 26 07:16:02.155616 kubelet[2555]: E0626 07:16:02.155210 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" containerName="flexvol-driver" Jun 26 07:16:02.155616 kubelet[2555]: I0626 07:16:02.155311 2555 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" containerName="flexvol-driver" Jun 26 07:16:02.225572 systemd[1]: Created slice kubepods-besteffort-pod0efa1a12_c88f_475f_9c2a_14d317ee82e0.slice - libcontainer container kubepods-besteffort-pod0efa1a12_c88f_475f_9c2a_14d317ee82e0.slice. 
Jun 26 07:16:02.269278 kubelet[2555]: I0626 07:16:02.269000 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0efa1a12-c88f-475f-9c2a-14d317ee82e0-tigera-ca-bundle\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269278 kubelet[2555]: I0626 07:16:02.269098 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-cni-net-dir\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269278 kubelet[2555]: I0626 07:16:02.269140 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-cni-bin-dir\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269278 kubelet[2555]: I0626 07:16:02.269168 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-policysync\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269278 kubelet[2555]: I0626 07:16:02.269203 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-xtables-lock\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269737 kubelet[2555]: I0626 07:16:02.269230 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0efa1a12-c88f-475f-9c2a-14d317ee82e0-node-certs\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269737 kubelet[2555]: I0626 07:16:02.269259 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-var-run-calico\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269737 kubelet[2555]: I0626 07:16:02.269293 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-flexvol-driver-host\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269737 kubelet[2555]: I0626 07:16:02.269335 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-cni-log-dir\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269737 kubelet[2555]: I0626 07:16:02.269369 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-var-lib-calico\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269995 kubelet[2555]: I0626 07:16:02.269405 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efa1a12-c88f-475f-9c2a-14d317ee82e0-lib-modules\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.269995 kubelet[2555]: I0626 07:16:02.269435 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98qj\" (UniqueName: \"kubernetes.io/projected/0efa1a12-c88f-475f-9c2a-14d317ee82e0-kube-api-access-p98qj\") pod \"calico-node-k22jp\" (UID: \"0efa1a12-c88f-475f-9c2a-14d317ee82e0\") " pod="calico-system/calico-node-k22jp"
Jun 26 07:16:02.547822 kubelet[2555]: E0626 07:16:02.547184 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:02.550076 containerd[1464]: time="2024-06-26T07:16:02.548054668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k22jp,Uid:0efa1a12-c88f-475f-9c2a-14d317ee82e0,Namespace:calico-system,Attempt:0,}"
Jun 26 07:16:02.691796 containerd[1464]: time="2024-06-26T07:16:02.687205412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:16:02.691796 containerd[1464]: time="2024-06-26T07:16:02.687311676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:16:02.691796 containerd[1464]: time="2024-06-26T07:16:02.687337426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:16:02.691796 containerd[1464]: time="2024-06-26T07:16:02.687352637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:16:02.778942 kubelet[2555]: I0626 07:16:02.778898 2555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b" path="/var/lib/kubelet/pods/a2a1620e-2c1c-4dc1-8a77-1b45a847ae6b/volumes"
Jun 26 07:16:02.782777 systemd[1]: Started cri-containerd-81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1.scope - libcontainer container 81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1.
Jun 26 07:16:02.933782 containerd[1464]: time="2024-06-26T07:16:02.933692081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k22jp,Uid:0efa1a12-c88f-475f-9c2a-14d317ee82e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1\""
Jun 26 07:16:02.938925 kubelet[2555]: E0626 07:16:02.938301 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:02.952273 containerd[1464]: time="2024-06-26T07:16:02.952199306Z" level=info msg="CreateContainer within sandbox \"81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jun 26 07:16:03.041270 containerd[1464]: time="2024-06-26T07:16:03.040820813Z" level=info msg="CreateContainer within sandbox \"81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f\""
Jun 26 07:16:03.044798 containerd[1464]: time="2024-06-26T07:16:03.044279786Z" level=info msg="StartContainer for \"d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f\""
Jun 26 07:16:03.171930 systemd[1]: Started cri-containerd-d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f.scope - libcontainer container d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f.
Jun 26 07:16:03.554718 containerd[1464]: time="2024-06-26T07:16:03.554624027Z" level=info msg="StartContainer for \"d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f\" returns successfully"
Jun 26 07:16:03.651226 systemd[1]: cri-containerd-d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f.scope: Deactivated successfully.
Jun 26 07:16:03.732155 kubelet[2555]: E0626 07:16:03.732039 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc"
Jun 26 07:16:03.771814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f-rootfs.mount: Deactivated successfully.
Jun 26 07:16:03.785887 containerd[1464]: time="2024-06-26T07:16:03.785412798Z" level=info msg="shim disconnected" id=d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f namespace=k8s.io
Jun 26 07:16:03.785887 containerd[1464]: time="2024-06-26T07:16:03.785510477Z" level=warning msg="cleaning up after shim disconnected" id=d3dcb06e4c5673fd6a3c0282c7694646dfe29df53a6b61c15bdb13cef4b13c5f namespace=k8s.io
Jun 26 07:16:03.785887 containerd[1464]: time="2024-06-26T07:16:03.785529935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:16:03.947843 kubelet[2555]: E0626 07:16:03.944185 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:04.549907 containerd[1464]: time="2024-06-26T07:16:04.549852453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:04.556967 containerd[1464]: time="2024-06-26T07:16:04.556897264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030"
Jun 26 07:16:04.559567 containerd[1464]: time="2024-06-26T07:16:04.559449865Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:04.566921 containerd[1464]: time="2024-06-26T07:16:04.566860885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:04.569263 containerd[1464]: time="2024-06-26T07:16:04.569179985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.314891964s"
Jun 26 07:16:04.569592 containerd[1464]: time="2024-06-26T07:16:04.569553973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\""
Jun 26 07:16:04.571349 containerd[1464]: time="2024-06-26T07:16:04.571205725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Jun 26 07:16:04.608175 containerd[1464]: time="2024-06-26T07:16:04.608054761Z" level=info msg="CreateContainer within sandbox \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jun 26 07:16:04.650640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860041924.mount: Deactivated successfully.
Jun 26 07:16:04.661158 containerd[1464]: time="2024-06-26T07:16:04.660408092Z" level=info msg="CreateContainer within sandbox \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\""
Jun 26 07:16:04.666285 containerd[1464]: time="2024-06-26T07:16:04.663114490Z" level=info msg="StartContainer for \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\""
Jun 26 07:16:04.733833 systemd[1]: Started cri-containerd-997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73.scope - libcontainer container 997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73.
Jun 26 07:16:04.855890 containerd[1464]: time="2024-06-26T07:16:04.855277091Z" level=info msg="StartContainer for \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\" returns successfully"
Jun 26 07:16:04.964254 containerd[1464]: time="2024-06-26T07:16:04.963788303Z" level=info msg="StopContainer for \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\" with timeout 300 (s)"
Jun 26 07:16:04.965669 containerd[1464]: time="2024-06-26T07:16:04.965621131Z" level=info msg="Stop container \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\" with signal terminated"
Jun 26 07:16:05.002368 kubelet[2555]: I0626 07:16:05.001309 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-59c845bdc-967vp" podStartSLOduration=1.991548564 podStartE2EDuration="8.001216567s" podCreationTimestamp="2024-06-26 07:15:57 +0000 UTC" firstStartedPulling="2024-06-26 07:15:58.560520872 +0000 UTC m=+20.162558734" lastFinishedPulling="2024-06-26 07:16:04.570188884 +0000 UTC m=+26.172226737" observedRunningTime="2024-06-26 07:16:05.000628119 +0000 UTC m=+26.602666003" watchObservedRunningTime="2024-06-26 07:16:05.001216567 +0000 UTC m=+26.603254433"
Jun 26 07:16:05.006379 systemd[1]: cri-containerd-997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73.scope: Deactivated successfully.
Jun 26 07:16:05.077702 containerd[1464]: time="2024-06-26T07:16:05.076602110Z" level=info msg="shim disconnected" id=997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73 namespace=k8s.io
Jun 26 07:16:05.077702 containerd[1464]: time="2024-06-26T07:16:05.076683835Z" level=warning msg="cleaning up after shim disconnected" id=997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73 namespace=k8s.io
Jun 26 07:16:05.077702 containerd[1464]: time="2024-06-26T07:16:05.076695976Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:16:05.115495 containerd[1464]: time="2024-06-26T07:16:05.114785265Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:16:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jun 26 07:16:05.125595 containerd[1464]: time="2024-06-26T07:16:05.125466330Z" level=info msg="StopContainer for \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\" returns successfully"
Jun 26 07:16:05.129808 containerd[1464]: time="2024-06-26T07:16:05.129309199Z" level=info msg="StopPodSandbox for \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\""
Jun 26 07:16:05.129808 containerd[1464]: time="2024-06-26T07:16:05.129407061Z" level=info msg="Container to stop \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 26 07:16:05.156036 systemd[1]: cri-containerd-088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988.scope: Deactivated successfully.
Jun 26 07:16:05.241016 containerd[1464]: time="2024-06-26T07:16:05.240725353Z" level=info msg="shim disconnected" id=088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988 namespace=k8s.io
Jun 26 07:16:05.241956 containerd[1464]: time="2024-06-26T07:16:05.241599224Z" level=warning msg="cleaning up after shim disconnected" id=088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988 namespace=k8s.io
Jun 26 07:16:05.241956 containerd[1464]: time="2024-06-26T07:16:05.241634980Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:16:05.269431 containerd[1464]: time="2024-06-26T07:16:05.269283204Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:16:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jun 26 07:16:05.272330 containerd[1464]: time="2024-06-26T07:16:05.272027550Z" level=info msg="TearDown network for sandbox \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\" successfully"
Jun 26 07:16:05.272330 containerd[1464]: time="2024-06-26T07:16:05.272087557Z" level=info msg="StopPodSandbox for \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\" returns successfully"
Jun 26 07:16:05.323979 kubelet[2555]: I0626 07:16:05.323905 2555 topology_manager.go:215] "Topology Admit Handler" podUID="a40b0360-da37-40c0-9ec1-bc7339ccef0d" podNamespace="calico-system" podName="calico-typha-7cc66557-6sg2s"
Jun 26 07:16:05.323979 kubelet[2555]: E0626 07:16:05.323987 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="008f8b3e-1803-451e-bcae-4d6a41f8f2ef" containerName="calico-typha"
Jun 26 07:16:05.324398 kubelet[2555]: I0626 07:16:05.324017 2555 memory_manager.go:354] "RemoveStaleState removing state" podUID="008f8b3e-1803-451e-bcae-4d6a41f8f2ef" containerName="calico-typha"
Jun 26 07:16:05.342625 systemd[1]: Created slice kubepods-besteffort-poda40b0360_da37_40c0_9ec1_bc7339ccef0d.slice - libcontainer container kubepods-besteffort-poda40b0360_da37_40c0_9ec1_bc7339ccef0d.slice.
Jun 26 07:16:05.427271 kubelet[2555]: I0626 07:16:05.426895 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz8vd\" (UniqueName: \"kubernetes.io/projected/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-kube-api-access-cz8vd\") pod \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\" (UID: \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\") "
Jun 26 07:16:05.427271 kubelet[2555]: I0626 07:16:05.426971 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-typha-certs\") pod \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\" (UID: \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\") "
Jun 26 07:16:05.427271 kubelet[2555]: I0626 07:16:05.427193 2555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-tigera-ca-bundle\") pod \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\" (UID: \"008f8b3e-1803-451e-bcae-4d6a41f8f2ef\") "
Jun 26 07:16:05.429058 kubelet[2555]: I0626 07:16:05.428333 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a40b0360-da37-40c0-9ec1-bc7339ccef0d-tigera-ca-bundle\") pod \"calico-typha-7cc66557-6sg2s\" (UID: \"a40b0360-da37-40c0-9ec1-bc7339ccef0d\") " pod="calico-system/calico-typha-7cc66557-6sg2s"
Jun 26 07:16:05.429058 kubelet[2555]: I0626 07:16:05.428422 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjjc\" (UniqueName: \"kubernetes.io/projected/a40b0360-da37-40c0-9ec1-bc7339ccef0d-kube-api-access-5vjjc\") pod \"calico-typha-7cc66557-6sg2s\" (UID: \"a40b0360-da37-40c0-9ec1-bc7339ccef0d\") " pod="calico-system/calico-typha-7cc66557-6sg2s"
Jun 26 07:16:05.429058 kubelet[2555]: I0626 07:16:05.428462 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a40b0360-da37-40c0-9ec1-bc7339ccef0d-typha-certs\") pod \"calico-typha-7cc66557-6sg2s\" (UID: \"a40b0360-da37-40c0-9ec1-bc7339ccef0d\") " pod="calico-system/calico-typha-7cc66557-6sg2s"
Jun 26 07:16:05.444733 kubelet[2555]: I0626 07:16:05.444404 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-kube-api-access-cz8vd" (OuterVolumeSpecName: "kube-api-access-cz8vd") pod "008f8b3e-1803-451e-bcae-4d6a41f8f2ef" (UID: "008f8b3e-1803-451e-bcae-4d6a41f8f2ef"). InnerVolumeSpecName "kube-api-access-cz8vd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 26 07:16:05.446075 kubelet[2555]: I0626 07:16:05.445889 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "008f8b3e-1803-451e-bcae-4d6a41f8f2ef" (UID: "008f8b3e-1803-451e-bcae-4d6a41f8f2ef"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 26 07:16:05.452948 kubelet[2555]: I0626 07:16:05.452806 2555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "008f8b3e-1803-451e-bcae-4d6a41f8f2ef" (UID: "008f8b3e-1803-451e-bcae-4d6a41f8f2ef"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jun 26 07:16:05.530101 kubelet[2555]: I0626 07:16:05.530020 2555 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cz8vd\" (UniqueName: \"kubernetes.io/projected/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-kube-api-access-cz8vd\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\""
Jun 26 07:16:05.538511 kubelet[2555]: I0626 07:16:05.538390 2555 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-typha-certs\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\""
Jun 26 07:16:05.538511 kubelet[2555]: I0626 07:16:05.538450 2555 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/008f8b3e-1803-451e-bcae-4d6a41f8f2ef-tigera-ca-bundle\") on node \"ci-4012.0.0-0-ebda1d1a0c\" DevicePath \"\""
Jun 26 07:16:05.589245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73-rootfs.mount: Deactivated successfully.
Jun 26 07:16:05.589463 systemd[1]: var-lib-kubelet-pods-008f8b3e\x2d1803\x2d451e\x2dbcae\x2d4d6a41f8f2ef-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Jun 26 07:16:05.589579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988-rootfs.mount: Deactivated successfully.
Jun 26 07:16:05.589675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988-shm.mount: Deactivated successfully.
Jun 26 07:16:05.589757 systemd[1]: var-lib-kubelet-pods-008f8b3e\x2d1803\x2d451e\x2dbcae\x2d4d6a41f8f2ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcz8vd.mount: Deactivated successfully.
Jun 26 07:16:05.589819 systemd[1]: var-lib-kubelet-pods-008f8b3e\x2d1803\x2d451e\x2dbcae\x2d4d6a41f8f2ef-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Jun 26 07:16:05.648823 kubelet[2555]: E0626 07:16:05.648353 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:05.650942 containerd[1464]: time="2024-06-26T07:16:05.650876101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cc66557-6sg2s,Uid:a40b0360-da37-40c0-9ec1-bc7339ccef0d,Namespace:calico-system,Attempt:0,}"
Jun 26 07:16:05.711092 containerd[1464]: time="2024-06-26T07:16:05.709397678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:16:05.711092 containerd[1464]: time="2024-06-26T07:16:05.709463700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:16:05.711092 containerd[1464]: time="2024-06-26T07:16:05.709485825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:16:05.711092 containerd[1464]: time="2024-06-26T07:16:05.709499022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:16:05.732300 kubelet[2555]: E0626 07:16:05.732230 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc"
Jun 26 07:16:05.782114 systemd[1]: Started cri-containerd-5d9296fb8e75ab681777570b93020409a2a2463379bda8c29d8ff0bb0051e136.scope - libcontainer container 5d9296fb8e75ab681777570b93020409a2a2463379bda8c29d8ff0bb0051e136.
Jun 26 07:16:05.900137 containerd[1464]: time="2024-06-26T07:16:05.899042455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cc66557-6sg2s,Uid:a40b0360-da37-40c0-9ec1-bc7339ccef0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d9296fb8e75ab681777570b93020409a2a2463379bda8c29d8ff0bb0051e136\""
Jun 26 07:16:05.903174 kubelet[2555]: E0626 07:16:05.902954 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:05.942209 containerd[1464]: time="2024-06-26T07:16:05.941849222Z" level=info msg="CreateContainer within sandbox \"5d9296fb8e75ab681777570b93020409a2a2463379bda8c29d8ff0bb0051e136\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jun 26 07:16:05.980972 kubelet[2555]: I0626 07:16:05.979291 2555 scope.go:117] "RemoveContainer" containerID="997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73"
Jun 26 07:16:05.981162 containerd[1464]: time="2024-06-26T07:16:05.980424436Z" level=info msg="CreateContainer within sandbox \"5d9296fb8e75ab681777570b93020409a2a2463379bda8c29d8ff0bb0051e136\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1b81700c889a350a921faab5fb33631cf5a5f71bdbb2e1c1c9c6d9f5c2c80af7\""
Jun 26 07:16:05.983864 containerd[1464]: time="2024-06-26T07:16:05.983803435Z" level=info msg="StartContainer for \"1b81700c889a350a921faab5fb33631cf5a5f71bdbb2e1c1c9c6d9f5c2c80af7\""
Jun 26 07:16:05.993888 containerd[1464]: time="2024-06-26T07:16:05.992986104Z" level=info msg="RemoveContainer for \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\""
Jun 26 07:16:05.996260 systemd[1]: Removed slice kubepods-besteffort-pod008f8b3e_1803_451e_bcae_4d6a41f8f2ef.slice - libcontainer container kubepods-besteffort-pod008f8b3e_1803_451e_bcae_4d6a41f8f2ef.slice.
Jun 26 07:16:06.005538 containerd[1464]: time="2024-06-26T07:16:06.005470222Z" level=info msg="RemoveContainer for \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\" returns successfully"
Jun 26 07:16:06.007518 kubelet[2555]: I0626 07:16:06.007457 2555 scope.go:117] "RemoveContainer" containerID="997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73"
Jun 26 07:16:06.009762 containerd[1464]: time="2024-06-26T07:16:06.008071172Z" level=error msg="ContainerStatus for \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\": not found"
Jun 26 07:16:06.009932 kubelet[2555]: E0626 07:16:06.008332 2555 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\": not found" containerID="997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73"
Jun 26 07:16:06.009932 kubelet[2555]: I0626 07:16:06.008450 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73"} err="failed to get container status \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\": rpc error: code = NotFound desc = an error occurred when try to find container \"997728fea645a1d4ae20218c8f7c76038db13b3162c798a77446c86450f93b73\": not found"
Jun 26 07:16:06.086188 systemd[1]: Started cri-containerd-1b81700c889a350a921faab5fb33631cf5a5f71bdbb2e1c1c9c6d9f5c2c80af7.scope - libcontainer container 1b81700c889a350a921faab5fb33631cf5a5f71bdbb2e1c1c9c6d9f5c2c80af7.
Jun 26 07:16:06.528401 containerd[1464]: time="2024-06-26T07:16:06.528331122Z" level=info msg="StartContainer for \"1b81700c889a350a921faab5fb33631cf5a5f71bdbb2e1c1c9c6d9f5c2c80af7\" returns successfully"
Jun 26 07:16:06.744288 kubelet[2555]: I0626 07:16:06.743921 2555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="008f8b3e-1803-451e-bcae-4d6a41f8f2ef" path="/var/lib/kubelet/pods/008f8b3e-1803-451e-bcae-4d6a41f8f2ef/volumes"
Jun 26 07:16:06.998799 kubelet[2555]: E0626 07:16:06.998252 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:07.732027 kubelet[2555]: E0626 07:16:07.731286 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc"
Jun 26 07:16:08.003643 kubelet[2555]: I0626 07:16:08.002692 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 26 07:16:08.004456 kubelet[2555]: E0626 07:16:08.003971 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:09.424923 containerd[1464]: time="2024-06-26T07:16:09.423468474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:09.424923 containerd[1464]: time="2024-06-26T07:16:09.424475309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Jun 26 07:16:09.426075 containerd[1464]: time="2024-06-26T07:16:09.426025488Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:09.442255 containerd[1464]: time="2024-06-26T07:16:09.442183538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:16:09.443885 containerd[1464]: time="2024-06-26T07:16:09.443829015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.872236359s"
Jun 26 07:16:09.444211 containerd[1464]: time="2024-06-26T07:16:09.444181228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Jun 26 07:16:09.449499 containerd[1464]: time="2024-06-26T07:16:09.449445192Z" level=info msg="CreateContainer within sandbox \"81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jun 26 07:16:09.468025 containerd[1464]: time="2024-06-26T07:16:09.467948513Z" level=info msg="CreateContainer within sandbox \"81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0\""
Jun 26 07:16:09.469871 containerd[1464]: time="2024-06-26T07:16:09.469125449Z" level=info msg="StartContainer for \"3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0\""
Jun 26 07:16:09.585991 systemd[1]: Started cri-containerd-3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0.scope - libcontainer container 3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0.
Jun 26 07:16:09.634716 containerd[1464]: time="2024-06-26T07:16:09.634616349Z" level=info msg="StartContainer for \"3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0\" returns successfully"
Jun 26 07:16:09.731536 kubelet[2555]: E0626 07:16:09.731289 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc"
Jun 26 07:16:10.025039 kubelet[2555]: E0626 07:16:10.022858 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:10.055802 kubelet[2555]: I0626 07:16:10.055739 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7cc66557-6sg2s" podStartSLOduration=12.055682646 podStartE2EDuration="12.055682646s" podCreationTimestamp="2024-06-26 07:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:16:07.041602585 +0000 UTC m=+28.643640446" watchObservedRunningTime="2024-06-26 07:16:10.055682646 +0000
UTC m=+31.657720547" Jun 26 07:16:10.355257 systemd[1]: cri-containerd-3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0.scope: Deactivated successfully. Jun 26 07:16:10.407468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0-rootfs.mount: Deactivated successfully. Jun 26 07:16:10.424442 containerd[1464]: time="2024-06-26T07:16:10.424180962Z" level=info msg="shim disconnected" id=3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0 namespace=k8s.io Jun 26 07:16:10.424442 containerd[1464]: time="2024-06-26T07:16:10.424239435Z" level=warning msg="cleaning up after shim disconnected" id=3b69c997a617acaee9e10d8741613a52d0b571e0ee80ed6612ff587e9d0705a0 namespace=k8s.io Jun 26 07:16:10.424442 containerd[1464]: time="2024-06-26T07:16:10.424249483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:16:10.453341 kubelet[2555]: I0626 07:16:10.452340 2555 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 26 07:16:10.487413 kubelet[2555]: I0626 07:16:10.486546 2555 topology_manager.go:215] "Topology Admit Handler" podUID="a5283807-341d-4398-a801-298fb264d1f1" podNamespace="kube-system" podName="coredns-76f75df574-bhgmr" Jun 26 07:16:10.495443 kubelet[2555]: I0626 07:16:10.494910 2555 topology_manager.go:215] "Topology Admit Handler" podUID="39f4ab23-9ba0-40aa-ae86-45cf1e57277c" podNamespace="kube-system" podName="coredns-76f75df574-8wm6w" Jun 26 07:16:10.498463 systemd[1]: Created slice kubepods-burstable-poda5283807_341d_4398_a801_298fb264d1f1.slice - libcontainer container kubepods-burstable-poda5283807_341d_4398_a801_298fb264d1f1.slice. 
Jun 26 07:16:10.516041 kubelet[2555]: I0626 07:16:10.515834 2555 topology_manager.go:215] "Topology Admit Handler" podUID="d4b567d7-752d-409d-9367-5069e210e1e9" podNamespace="calico-system" podName="calico-kube-controllers-7bbf4c86b7-vs76m" Jun 26 07:16:10.529656 systemd[1]: Created slice kubepods-burstable-pod39f4ab23_9ba0_40aa_ae86_45cf1e57277c.slice - libcontainer container kubepods-burstable-pod39f4ab23_9ba0_40aa_ae86_45cf1e57277c.slice. Jun 26 07:16:10.541535 systemd[1]: Created slice kubepods-besteffort-podd4b567d7_752d_409d_9367_5069e210e1e9.slice - libcontainer container kubepods-besteffort-podd4b567d7_752d_409d_9367_5069e210e1e9.slice. Jun 26 07:16:10.583465 kubelet[2555]: I0626 07:16:10.581121 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5283807-341d-4398-a801-298fb264d1f1-config-volume\") pod \"coredns-76f75df574-bhgmr\" (UID: \"a5283807-341d-4398-a801-298fb264d1f1\") " pod="kube-system/coredns-76f75df574-bhgmr" Jun 26 07:16:10.583465 kubelet[2555]: I0626 07:16:10.581177 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgz6\" (UniqueName: \"kubernetes.io/projected/a5283807-341d-4398-a801-298fb264d1f1-kube-api-access-4dgz6\") pod \"coredns-76f75df574-bhgmr\" (UID: \"a5283807-341d-4398-a801-298fb264d1f1\") " pod="kube-system/coredns-76f75df574-bhgmr" Jun 26 07:16:10.682677 kubelet[2555]: I0626 07:16:10.681930 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8467\" (UniqueName: \"kubernetes.io/projected/d4b567d7-752d-409d-9367-5069e210e1e9-kube-api-access-q8467\") pod \"calico-kube-controllers-7bbf4c86b7-vs76m\" (UID: \"d4b567d7-752d-409d-9367-5069e210e1e9\") " pod="calico-system/calico-kube-controllers-7bbf4c86b7-vs76m" Jun 26 07:16:10.682677 kubelet[2555]: I0626 07:16:10.682007 2555 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39f4ab23-9ba0-40aa-ae86-45cf1e57277c-config-volume\") pod \"coredns-76f75df574-8wm6w\" (UID: \"39f4ab23-9ba0-40aa-ae86-45cf1e57277c\") " pod="kube-system/coredns-76f75df574-8wm6w" Jun 26 07:16:10.682677 kubelet[2555]: I0626 07:16:10.682045 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zpvz\" (UniqueName: \"kubernetes.io/projected/39f4ab23-9ba0-40aa-ae86-45cf1e57277c-kube-api-access-5zpvz\") pod \"coredns-76f75df574-8wm6w\" (UID: \"39f4ab23-9ba0-40aa-ae86-45cf1e57277c\") " pod="kube-system/coredns-76f75df574-8wm6w" Jun 26 07:16:10.682677 kubelet[2555]: I0626 07:16:10.682121 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4b567d7-752d-409d-9367-5069e210e1e9-tigera-ca-bundle\") pod \"calico-kube-controllers-7bbf4c86b7-vs76m\" (UID: \"d4b567d7-752d-409d-9367-5069e210e1e9\") " pod="calico-system/calico-kube-controllers-7bbf4c86b7-vs76m" Jun 26 07:16:10.820082 kubelet[2555]: E0626 07:16:10.820006 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:10.822846 containerd[1464]: time="2024-06-26T07:16:10.822635191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bhgmr,Uid:a5283807-341d-4398-a801-298fb264d1f1,Namespace:kube-system,Attempt:0,}" Jun 26 07:16:10.835807 kubelet[2555]: E0626 07:16:10.835385 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:10.836020 containerd[1464]: time="2024-06-26T07:16:10.835909293Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8wm6w,Uid:39f4ab23-9ba0-40aa-ae86-45cf1e57277c,Namespace:kube-system,Attempt:0,}" Jun 26 07:16:10.855772 containerd[1464]: time="2024-06-26T07:16:10.855495507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbf4c86b7-vs76m,Uid:d4b567d7-752d-409d-9367-5069e210e1e9,Namespace:calico-system,Attempt:0,}" Jun 26 07:16:11.038945 kubelet[2555]: E0626 07:16:11.038563 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:11.043885 containerd[1464]: time="2024-06-26T07:16:11.043825305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 26 07:16:11.136833 containerd[1464]: time="2024-06-26T07:16:11.136723468Z" level=error msg="Failed to destroy network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.146021 containerd[1464]: time="2024-06-26T07:16:11.145945401Z" level=error msg="Failed to destroy network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.148169 containerd[1464]: time="2024-06-26T07:16:11.148075990Z" level=error msg="encountered an error cleaning up failed sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jun 26 07:16:11.149027 containerd[1464]: time="2024-06-26T07:16:11.148959183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8wm6w,Uid:39f4ab23-9ba0-40aa-ae86-45cf1e57277c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.149826 kubelet[2555]: E0626 07:16:11.149665 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.149826 kubelet[2555]: E0626 07:16:11.149788 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-8wm6w" Jun 26 07:16:11.149826 kubelet[2555]: E0626 07:16:11.149822 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-8wm6w" Jun 26 07:16:11.153038 kubelet[2555]: E0626 07:16:11.149953 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-8wm6w_kube-system(39f4ab23-9ba0-40aa-ae86-45cf1e57277c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-8wm6w_kube-system(39f4ab23-9ba0-40aa-ae86-45cf1e57277c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-8wm6w" podUID="39f4ab23-9ba0-40aa-ae86-45cf1e57277c" Jun 26 07:16:11.153038 kubelet[2555]: E0626 07:16:11.150639 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.153038 kubelet[2555]: E0626 07:16:11.150692 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bhgmr" Jun 26 07:16:11.153279 containerd[1464]: time="2024-06-26T07:16:11.148131719Z" level=error msg="encountered an error cleaning up failed sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.153279 containerd[1464]: time="2024-06-26T07:16:11.150259449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bhgmr,Uid:a5283807-341d-4398-a801-298fb264d1f1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.153502 kubelet[2555]: E0626 07:16:11.150720 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bhgmr" Jun 26 07:16:11.153502 kubelet[2555]: E0626 07:16:11.150942 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-bhgmr_kube-system(a5283807-341d-4398-a801-298fb264d1f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-bhgmr_kube-system(a5283807-341d-4398-a801-298fb264d1f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-bhgmr" podUID="a5283807-341d-4398-a801-298fb264d1f1" Jun 26 07:16:11.155721 containerd[1464]: time="2024-06-26T07:16:11.155653716Z" level=error msg="Failed to destroy network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.156311 containerd[1464]: time="2024-06-26T07:16:11.156235295Z" level=error msg="encountered an error cleaning up failed sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.156408 containerd[1464]: time="2024-06-26T07:16:11.156328926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbf4c86b7-vs76m,Uid:d4b567d7-752d-409d-9367-5069e210e1e9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.157263 kubelet[2555]: E0626 07:16:11.157201 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.157717 kubelet[2555]: E0626 07:16:11.157457 
2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bbf4c86b7-vs76m" Jun 26 07:16:11.157717 kubelet[2555]: E0626 07:16:11.157490 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bbf4c86b7-vs76m" Jun 26 07:16:11.158018 kubelet[2555]: E0626 07:16:11.157955 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bbf4c86b7-vs76m_calico-system(d4b567d7-752d-409d-9367-5069e210e1e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bbf4c86b7-vs76m_calico-system(d4b567d7-752d-409d-9367-5069e210e1e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bbf4c86b7-vs76m" podUID="d4b567d7-752d-409d-9367-5069e210e1e9" Jun 26 07:16:11.702027 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19-shm.mount: Deactivated successfully. 
Jun 26 07:16:11.745008 systemd[1]: Created slice kubepods-besteffort-pod57e08add_fe69_4f05_8bca_834c135d01cc.slice - libcontainer container kubepods-besteffort-pod57e08add_fe69_4f05_8bca_834c135d01cc.slice. Jun 26 07:16:11.749530 containerd[1464]: time="2024-06-26T07:16:11.749468162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whchj,Uid:57e08add-fe69-4f05-8bca-834c135d01cc,Namespace:calico-system,Attempt:0,}" Jun 26 07:16:11.852956 containerd[1464]: time="2024-06-26T07:16:11.852896712Z" level=error msg="Failed to destroy network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.854787 containerd[1464]: time="2024-06-26T07:16:11.854651783Z" level=error msg="encountered an error cleaning up failed sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.854787 containerd[1464]: time="2024-06-26T07:16:11.854785446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whchj,Uid:57e08add-fe69-4f05-8bca-834c135d01cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.855941 kubelet[2555]: E0626 07:16:11.855449 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:11.855941 kubelet[2555]: E0626 07:16:11.855508 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whchj" Jun 26 07:16:11.855941 kubelet[2555]: E0626 07:16:11.855529 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whchj" Jun 26 07:16:11.861256 kubelet[2555]: E0626 07:16:11.858206 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-whchj_calico-system(57e08add-fe69-4f05-8bca-834c135d01cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-whchj_calico-system(57e08add-fe69-4f05-8bca-834c135d01cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc" Jun 26 07:16:11.857654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4-shm.mount: Deactivated successfully. Jun 26 07:16:12.042617 kubelet[2555]: I0626 07:16:12.041857 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:12.043593 containerd[1464]: time="2024-06-26T07:16:12.043511943Z" level=info msg="StopPodSandbox for \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\"" Jun 26 07:16:12.044145 containerd[1464]: time="2024-06-26T07:16:12.044074227Z" level=info msg="Ensure that sandbox 7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19 in task-service has been cleanup successfully" Jun 26 07:16:12.048397 kubelet[2555]: I0626 07:16:12.047338 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:12.050776 containerd[1464]: time="2024-06-26T07:16:12.050541475Z" level=info msg="StopPodSandbox for \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\"" Jun 26 07:16:12.052006 containerd[1464]: time="2024-06-26T07:16:12.051841402Z" level=info msg="Ensure that sandbox 70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80 in task-service has been cleanup successfully" Jun 26 07:16:12.058843 kubelet[2555]: I0626 07:16:12.058541 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:12.063466 containerd[1464]: time="2024-06-26T07:16:12.063415394Z" level=info msg="StopPodSandbox for \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\"" Jun 26 07:16:12.063658 containerd[1464]: 
time="2024-06-26T07:16:12.063639356Z" level=info msg="Ensure that sandbox 0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9 in task-service has been cleanup successfully" Jun 26 07:16:12.068490 kubelet[2555]: I0626 07:16:12.068431 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:12.074792 containerd[1464]: time="2024-06-26T07:16:12.074092956Z" level=info msg="StopPodSandbox for \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\"" Jun 26 07:16:12.074792 containerd[1464]: time="2024-06-26T07:16:12.074393186Z" level=info msg="Ensure that sandbox de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4 in task-service has been cleanup successfully" Jun 26 07:16:12.140548 containerd[1464]: time="2024-06-26T07:16:12.140476354Z" level=error msg="StopPodSandbox for \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\" failed" error="failed to destroy network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:12.141006 kubelet[2555]: E0626 07:16:12.140975 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:12.141569 kubelet[2555]: E0626 07:16:12.141203 2555 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19"} Jun 26 07:16:12.141569 kubelet[2555]: E0626 07:16:12.141498 2555 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39f4ab23-9ba0-40aa-ae86-45cf1e57277c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 26 07:16:12.141569 kubelet[2555]: E0626 07:16:12.141544 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39f4ab23-9ba0-40aa-ae86-45cf1e57277c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-8wm6w" podUID="39f4ab23-9ba0-40aa-ae86-45cf1e57277c" Jun 26 07:16:12.166810 containerd[1464]: time="2024-06-26T07:16:12.166256022Z" level=error msg="StopPodSandbox for \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\" failed" error="failed to destroy network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:12.166961 kubelet[2555]: E0626 07:16:12.166611 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:12.166961 kubelet[2555]: E0626 07:16:12.166666 2555 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80"} Jun 26 07:16:12.166961 kubelet[2555]: E0626 07:16:12.166711 2555 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5283807-341d-4398-a801-298fb264d1f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 26 07:16:12.166961 kubelet[2555]: E0626 07:16:12.166770 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5283807-341d-4398-a801-298fb264d1f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bhgmr" podUID="a5283807-341d-4398-a801-298fb264d1f1" Jun 26 07:16:12.171089 containerd[1464]: time="2024-06-26T07:16:12.170886575Z" level=error msg="StopPodSandbox for \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\" failed" error="failed to destroy network 
for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:12.171767 kubelet[2555]: E0626 07:16:12.171577 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:12.171767 kubelet[2555]: E0626 07:16:12.171673 2555 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4"} Jun 26 07:16:12.171767 kubelet[2555]: E0626 07:16:12.171733 2555 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57e08add-fe69-4f05-8bca-834c135d01cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 26 07:16:12.172651 kubelet[2555]: E0626 07:16:12.172084 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57e08add-fe69-4f05-8bca-834c135d01cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whchj" podUID="57e08add-fe69-4f05-8bca-834c135d01cc" Jun 26 07:16:12.177427 containerd[1464]: time="2024-06-26T07:16:12.177337172Z" level=error msg="StopPodSandbox for \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\" failed" error="failed to destroy network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 26 07:16:12.178321 kubelet[2555]: E0626 07:16:12.177882 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:12.178321 kubelet[2555]: E0626 07:16:12.177953 2555 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9"} Jun 26 07:16:12.178321 kubelet[2555]: E0626 07:16:12.178002 2555 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4b567d7-752d-409d-9367-5069e210e1e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Jun 26 07:16:12.178321 kubelet[2555]: E0626 07:16:12.178042 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4b567d7-752d-409d-9367-5069e210e1e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bbf4c86b7-vs76m" podUID="d4b567d7-752d-409d-9367-5069e210e1e9" Jun 26 07:16:16.446631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177046358.mount: Deactivated successfully. Jun 26 07:16:16.505193 containerd[1464]: time="2024-06-26T07:16:16.504124699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 26 07:16:16.506729 containerd[1464]: time="2024-06-26T07:16:16.499320761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:16.507990 containerd[1464]: time="2024-06-26T07:16:16.507928222Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:16.509070 containerd[1464]: time="2024-06-26T07:16:16.509013753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 5.465147709s" Jun 26 07:16:16.509070 containerd[1464]: 
time="2024-06-26T07:16:16.509056150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 26 07:16:16.511584 containerd[1464]: time="2024-06-26T07:16:16.511509892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:16.555520 containerd[1464]: time="2024-06-26T07:16:16.555438792Z" level=info msg="CreateContainer within sandbox \"81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 26 07:16:16.600543 containerd[1464]: time="2024-06-26T07:16:16.600437205Z" level=info msg="CreateContainer within sandbox \"81e2a039865d66c16404c3aa53c06493c495fabd2bea934fd621c7704764cbf1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a4e6466f19beb2516a143803a29f7e5526b7be106c3014a6148c005b7208a753\"" Jun 26 07:16:16.601230 containerd[1464]: time="2024-06-26T07:16:16.601190818Z" level=info msg="StartContainer for \"a4e6466f19beb2516a143803a29f7e5526b7be106c3014a6148c005b7208a753\"" Jun 26 07:16:16.646289 systemd[1]: Started cri-containerd-a4e6466f19beb2516a143803a29f7e5526b7be106c3014a6148c005b7208a753.scope - libcontainer container a4e6466f19beb2516a143803a29f7e5526b7be106c3014a6148c005b7208a753. Jun 26 07:16:16.709706 containerd[1464]: time="2024-06-26T07:16:16.709487051Z" level=info msg="StartContainer for \"a4e6466f19beb2516a143803a29f7e5526b7be106c3014a6148c005b7208a753\" returns successfully" Jun 26 07:16:16.828786 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 26 07:16:16.829458 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 26 07:16:17.094597 kubelet[2555]: E0626 07:16:17.093796 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:17.209231 kubelet[2555]: I0626 07:16:17.209169 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 26 07:16:17.211163 kubelet[2555]: E0626 07:16:17.211117 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:17.262840 kubelet[2555]: I0626 07:16:17.262762 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-k22jp" podStartSLOduration=2.7103811479999997 podStartE2EDuration="15.259879728s" podCreationTimestamp="2024-06-26 07:16:02 +0000 UTC" firstStartedPulling="2024-06-26 07:16:03.959853233 +0000 UTC m=+25.561891076" lastFinishedPulling="2024-06-26 07:16:16.509351819 +0000 UTC m=+38.111389656" observedRunningTime="2024-06-26 07:16:17.120592182 +0000 UTC m=+38.722630042" watchObservedRunningTime="2024-06-26 07:16:17.259879728 +0000 UTC m=+38.861917592" Jun 26 07:16:18.095364 kubelet[2555]: E0626 07:16:18.095267 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:19.079910 systemd-networkd[1360]: vxlan.calico: Link UP Jun 26 07:16:19.079925 systemd-networkd[1360]: vxlan.calico: Gained carrier Jun 26 07:16:19.595524 kubelet[2555]: I0626 07:16:19.595447 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 26 07:16:19.598317 kubelet[2555]: E0626 07:16:19.598215 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:20.241102 systemd-networkd[1360]: vxlan.calico: Gained IPv6LL Jun 26 07:16:23.733059 containerd[1464]: time="2024-06-26T07:16:23.732514474Z" level=info msg="StopPodSandbox for \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\"" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:23.835 [INFO][4127] k8s.go 608: Cleaning up netns ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:23.837 [INFO][4127] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" iface="eth0" netns="/var/run/netns/cni-70e41e38-3d6e-035e-7a68-fc0ebaa1263b" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:23.837 [INFO][4127] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" iface="eth0" netns="/var/run/netns/cni-70e41e38-3d6e-035e-7a68-fc0ebaa1263b" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:23.838 [INFO][4127] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" iface="eth0" netns="/var/run/netns/cni-70e41e38-3d6e-035e-7a68-fc0ebaa1263b" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:23.838 [INFO][4127] k8s.go 615: Releasing IP address(es) ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:23.838 [INFO][4127] utils.go 188: Calico CNI releasing IP address ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:24.137 [INFO][4133] ipam_plugin.go 411: Releasing address using handleID ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:24.138 [INFO][4133] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:24.138 [INFO][4133] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:24.152 [WARNING][4133] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:24.152 [INFO][4133] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:24.156 [INFO][4133] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:24.164036 containerd[1464]: 2024-06-26 07:16:24.159 [INFO][4127] k8s.go 621: Teardown processing complete. ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:24.164983 containerd[1464]: time="2024-06-26T07:16:24.164402496Z" level=info msg="TearDown network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\" successfully" Jun 26 07:16:24.168783 containerd[1464]: time="2024-06-26T07:16:24.166663003Z" level=info msg="StopPodSandbox for \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\" returns successfully" Jun 26 07:16:24.172148 systemd[1]: run-netns-cni\x2d70e41e38\x2d3d6e\x2d035e\x2d7a68\x2dfc0ebaa1263b.mount: Deactivated successfully. 
Jun 26 07:16:24.185636 containerd[1464]: time="2024-06-26T07:16:24.185079903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whchj,Uid:57e08add-fe69-4f05-8bca-834c135d01cc,Namespace:calico-system,Attempt:1,}" Jun 26 07:16:24.470938 systemd-networkd[1360]: caliaf9e40da153: Link UP Jun 26 07:16:24.472026 systemd-networkd[1360]: caliaf9e40da153: Gained carrier Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.291 [INFO][4143] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0 csi-node-driver- calico-system 57e08add-fe69-4f05-8bca-834c135d01cc 844 0 2024-06-26 07:15:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.0.0-0-ebda1d1a0c csi-node-driver-whchj eth0 default [] [] [kns.calico-system ksa.calico-system.default] caliaf9e40da153 [] []}} ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Namespace="calico-system" Pod="csi-node-driver-whchj" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.292 [INFO][4143] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Namespace="calico-system" Pod="csi-node-driver-whchj" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.366 [INFO][4153] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" 
HandleID="k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.382 [INFO][4153] ipam_plugin.go 264: Auto assigning IP ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" HandleID="k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-0-ebda1d1a0c", "pod":"csi-node-driver-whchj", "timestamp":"2024-06-26 07:16:24.36602445 +0000 UTC"}, Hostname:"ci-4012.0.0-0-ebda1d1a0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.382 [INFO][4153] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.382 [INFO][4153] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.383 [INFO][4153] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-0-ebda1d1a0c' Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.387 [INFO][4153] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.413 [INFO][4153] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.423 [INFO][4153] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.427 [INFO][4153] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.432 [INFO][4153] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.432 [INFO][4153] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.436 [INFO][4153] ipam.go 1685: Creating new handle: k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345 Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.445 [INFO][4153] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.453 [INFO][4153] ipam.go 1216: Successfully claimed IPs: [192.168.36.193/26] 
block=192.168.36.192/26 handle="k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.453 [INFO][4153] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.193/26] handle="k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.454 [INFO][4153] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:24.509001 containerd[1464]: 2024-06-26 07:16:24.454 [INFO][4153] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.193/26] IPv6=[] ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" HandleID="k8s-pod-network.46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.511241 containerd[1464]: 2024-06-26 07:16:24.461 [INFO][4143] k8s.go 386: Populated endpoint ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Namespace="calico-system" Pod="csi-node-driver-whchj" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57e08add-fe69-4f05-8bca-834c135d01cc", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"", Pod:"csi-node-driver-whchj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliaf9e40da153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:24.511241 containerd[1464]: 2024-06-26 07:16:24.461 [INFO][4143] k8s.go 387: Calico CNI using IPs: [192.168.36.193/32] ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Namespace="calico-system" Pod="csi-node-driver-whchj" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.511241 containerd[1464]: 2024-06-26 07:16:24.461 [INFO][4143] dataplane_linux.go 68: Setting the host side veth name to caliaf9e40da153 ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Namespace="calico-system" Pod="csi-node-driver-whchj" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.511241 containerd[1464]: 2024-06-26 07:16:24.470 [INFO][4143] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Namespace="calico-system" Pod="csi-node-driver-whchj" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.511241 containerd[1464]: 2024-06-26 07:16:24.474 [INFO][4143] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Namespace="calico-system" Pod="csi-node-driver-whchj" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57e08add-fe69-4f05-8bca-834c135d01cc", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345", Pod:"csi-node-driver-whchj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliaf9e40da153", MAC:"3e:ff:14:10:2c:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:24.511241 containerd[1464]: 2024-06-26 07:16:24.499 [INFO][4143] k8s.go 500: Wrote updated endpoint to datastore ContainerID="46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345" Namespace="calico-system" 
Pod="csi-node-driver-whchj" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:24.585588 containerd[1464]: time="2024-06-26T07:16:24.585198092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:16:24.585588 containerd[1464]: time="2024-06-26T07:16:24.585330615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:24.585588 containerd[1464]: time="2024-06-26T07:16:24.585348479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:16:24.585588 containerd[1464]: time="2024-06-26T07:16:24.585373379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:24.634464 systemd[1]: Started cri-containerd-46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345.scope - libcontainer container 46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345. 
Jun 26 07:16:24.693011 containerd[1464]: time="2024-06-26T07:16:24.692869739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whchj,Uid:57e08add-fe69-4f05-8bca-834c135d01cc,Namespace:calico-system,Attempt:1,} returns sandbox id \"46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345\"" Jun 26 07:16:24.702973 containerd[1464]: time="2024-06-26T07:16:24.702029851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 26 07:16:25.681127 systemd-networkd[1360]: caliaf9e40da153: Gained IPv6LL Jun 26 07:16:25.733631 containerd[1464]: time="2024-06-26T07:16:25.732874559Z" level=info msg="StopPodSandbox for \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\"" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.818 [INFO][4232] k8s.go 608: Cleaning up netns ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.818 [INFO][4232] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" iface="eth0" netns="/var/run/netns/cni-186a70f4-73af-a5e5-6bd7-534d73bc9336" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.819 [INFO][4232] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" iface="eth0" netns="/var/run/netns/cni-186a70f4-73af-a5e5-6bd7-534d73bc9336" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.821 [INFO][4232] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" iface="eth0" netns="/var/run/netns/cni-186a70f4-73af-a5e5-6bd7-534d73bc9336" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.821 [INFO][4232] k8s.go 615: Releasing IP address(es) ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.821 [INFO][4232] utils.go 188: Calico CNI releasing IP address ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.879 [INFO][4238] ipam_plugin.go 411: Releasing address using handleID ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.879 [INFO][4238] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.879 [INFO][4238] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.889 [WARNING][4238] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.889 [INFO][4238] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.892 [INFO][4238] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:25.898604 containerd[1464]: 2024-06-26 07:16:25.895 [INFO][4232] k8s.go 621: Teardown processing complete. ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:25.903437 systemd[1]: run-netns-cni\x2d186a70f4\x2d73af\x2da5e5\x2d6bd7\x2d534d73bc9336.mount: Deactivated successfully. 
Jun 26 07:16:25.903930 containerd[1464]: time="2024-06-26T07:16:25.903883155Z" level=info msg="TearDown network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\" successfully" Jun 26 07:16:25.904028 containerd[1464]: time="2024-06-26T07:16:25.903934245Z" level=info msg="StopPodSandbox for \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\" returns successfully" Jun 26 07:16:25.905616 kubelet[2555]: E0626 07:16:25.904696 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:25.909372 containerd[1464]: time="2024-06-26T07:16:25.908690133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8wm6w,Uid:39f4ab23-9ba0-40aa-ae86-45cf1e57277c,Namespace:kube-system,Attempt:1,}" Jun 26 07:16:26.222059 systemd-networkd[1360]: calia59ee8d28ce: Link UP Jun 26 07:16:26.224495 systemd-networkd[1360]: calia59ee8d28ce: Gained carrier Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.017 [INFO][4249] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0 coredns-76f75df574- kube-system 39f4ab23-9ba0-40aa-ae86-45cf1e57277c 854 0 2024-06-26 07:15:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-0-ebda1d1a0c coredns-76f75df574-8wm6w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia59ee8d28ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Namespace="kube-system" Pod="coredns-76f75df574-8wm6w" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-" Jun 26 
07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.017 [INFO][4249] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Namespace="kube-system" Pod="coredns-76f75df574-8wm6w" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.092 [INFO][4256] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" HandleID="k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.115 [INFO][4256] ipam_plugin.go 264: Auto assigning IP ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" HandleID="k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290db0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-0-ebda1d1a0c", "pod":"coredns-76f75df574-8wm6w", "timestamp":"2024-06-26 07:16:26.092669814 +0000 UTC"}, Hostname:"ci-4012.0.0-0-ebda1d1a0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.115 [INFO][4256] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.115 [INFO][4256] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.116 [INFO][4256] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-0-ebda1d1a0c' Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.120 [INFO][4256] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.145 [INFO][4256] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.162 [INFO][4256] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.169 [INFO][4256] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.174 [INFO][4256] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.174 [INFO][4256] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.178 [INFO][4256] ipam.go 1685: Creating new handle: k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37 Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.184 [INFO][4256] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.203 [INFO][4256] ipam.go 1216: Successfully claimed IPs: [192.168.36.194/26] 
block=192.168.36.192/26 handle="k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.203 [INFO][4256] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.194/26] handle="k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.203 [INFO][4256] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:26.277341 containerd[1464]: 2024-06-26 07:16:26.203 [INFO][4256] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.194/26] IPv6=[] ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" HandleID="k8s-pod-network.5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:26.278770 containerd[1464]: 2024-06-26 07:16:26.212 [INFO][4249] k8s.go 386: Populated endpoint ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Namespace="kube-system" Pod="coredns-76f75df574-8wm6w" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39f4ab23-9ba0-40aa-ae86-45cf1e57277c", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"", Pod:"coredns-76f75df574-8wm6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia59ee8d28ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:26.278770 containerd[1464]: 2024-06-26 07:16:26.213 [INFO][4249] k8s.go 387: Calico CNI using IPs: [192.168.36.194/32] ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Namespace="kube-system" Pod="coredns-76f75df574-8wm6w" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:26.278770 containerd[1464]: 2024-06-26 07:16:26.213 [INFO][4249] dataplane_linux.go 68: Setting the host side veth name to calia59ee8d28ce ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Namespace="kube-system" Pod="coredns-76f75df574-8wm6w" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:26.278770 containerd[1464]: 2024-06-26 07:16:26.226 [INFO][4249] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Namespace="kube-system" 
Pod="coredns-76f75df574-8wm6w" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:26.278770 containerd[1464]: 2024-06-26 07:16:26.230 [INFO][4249] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Namespace="kube-system" Pod="coredns-76f75df574-8wm6w" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39f4ab23-9ba0-40aa-ae86-45cf1e57277c", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37", Pod:"coredns-76f75df574-8wm6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia59ee8d28ce", MAC:"22:4b:f2:e8:c9:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:26.278770 containerd[1464]: 2024-06-26 07:16:26.260 [INFO][4249] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37" Namespace="kube-system" Pod="coredns-76f75df574-8wm6w" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:26.353772 containerd[1464]: time="2024-06-26T07:16:26.351833446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:16:26.353772 containerd[1464]: time="2024-06-26T07:16:26.351975830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:26.353772 containerd[1464]: time="2024-06-26T07:16:26.352103169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:16:26.353772 containerd[1464]: time="2024-06-26T07:16:26.352225367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:26.408170 systemd[1]: Started cri-containerd-5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37.scope - libcontainer container 5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37. 
Jun 26 07:16:26.515253 containerd[1464]: time="2024-06-26T07:16:26.515020973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:26.520826 containerd[1464]: time="2024-06-26T07:16:26.520681548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 26 07:16:26.522052 containerd[1464]: time="2024-06-26T07:16:26.522012661Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:26.524820 containerd[1464]: time="2024-06-26T07:16:26.524079792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8wm6w,Uid:39f4ab23-9ba0-40aa-ae86-45cf1e57277c,Namespace:kube-system,Attempt:1,} returns sandbox id \"5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37\"" Jun 26 07:16:26.526650 kubelet[2555]: E0626 07:16:26.526611 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:26.534990 containerd[1464]: time="2024-06-26T07:16:26.534698477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:26.537992 containerd[1464]: time="2024-06-26T07:16:26.537721142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.835539148s" Jun 26 07:16:26.537992 containerd[1464]: 
time="2024-06-26T07:16:26.537859416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 26 07:16:26.546635 containerd[1464]: time="2024-06-26T07:16:26.546313981Z" level=info msg="CreateContainer within sandbox \"46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 26 07:16:26.563235 containerd[1464]: time="2024-06-26T07:16:26.562973527Z" level=info msg="CreateContainer within sandbox \"5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 26 07:16:26.623777 containerd[1464]: time="2024-06-26T07:16:26.623529468Z" level=info msg="CreateContainer within sandbox \"5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c44790707f34dd53a6aca0d432d20b4f6212864fb76fc0e5895c4bf4d5e4619f\"" Jun 26 07:16:26.624872 containerd[1464]: time="2024-06-26T07:16:26.624780508Z" level=info msg="StartContainer for \"c44790707f34dd53a6aca0d432d20b4f6212864fb76fc0e5895c4bf4d5e4619f\"" Jun 26 07:16:26.627065 containerd[1464]: time="2024-06-26T07:16:26.627010028Z" level=info msg="CreateContainer within sandbox \"46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b504d0b2db5f00e3e69d940895ef88c6a76577030b094dd23a69e71697dcf90a\"" Jun 26 07:16:26.628827 containerd[1464]: time="2024-06-26T07:16:26.627843545Z" level=info msg="StartContainer for \"b504d0b2db5f00e3e69d940895ef88c6a76577030b094dd23a69e71697dcf90a\"" Jun 26 07:16:26.700138 systemd[1]: Started cri-containerd-c44790707f34dd53a6aca0d432d20b4f6212864fb76fc0e5895c4bf4d5e4619f.scope - libcontainer container c44790707f34dd53a6aca0d432d20b4f6212864fb76fc0e5895c4bf4d5e4619f. 
Jun 26 07:16:26.709117 systemd[1]: Started cri-containerd-b504d0b2db5f00e3e69d940895ef88c6a76577030b094dd23a69e71697dcf90a.scope - libcontainer container b504d0b2db5f00e3e69d940895ef88c6a76577030b094dd23a69e71697dcf90a. Jun 26 07:16:26.737999 containerd[1464]: time="2024-06-26T07:16:26.737950272Z" level=info msg="StopPodSandbox for \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\"" Jun 26 07:16:26.755035 containerd[1464]: time="2024-06-26T07:16:26.753002577Z" level=info msg="StopPodSandbox for \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\"" Jun 26 07:16:26.837805 containerd[1464]: time="2024-06-26T07:16:26.837578550Z" level=info msg="StartContainer for \"c44790707f34dd53a6aca0d432d20b4f6212864fb76fc0e5895c4bf4d5e4619f\" returns successfully" Jun 26 07:16:26.886382 containerd[1464]: time="2024-06-26T07:16:26.886326770Z" level=info msg="StartContainer for \"b504d0b2db5f00e3e69d940895ef88c6a76577030b094dd23a69e71697dcf90a\" returns successfully" Jun 26 07:16:26.892100 containerd[1464]: time="2024-06-26T07:16:26.892022353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:26.943 [INFO][4407] k8s.go 608: Cleaning up netns ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:26.944 [INFO][4407] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" iface="eth0" netns="/var/run/netns/cni-2a2cae75-0344-e03e-28ea-a79ca4245cbf" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:26.946 [INFO][4407] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" iface="eth0" netns="/var/run/netns/cni-2a2cae75-0344-e03e-28ea-a79ca4245cbf" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:26.946 [INFO][4407] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" iface="eth0" netns="/var/run/netns/cni-2a2cae75-0344-e03e-28ea-a79ca4245cbf" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:26.946 [INFO][4407] k8s.go 615: Releasing IP address(es) ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:26.946 [INFO][4407] utils.go 188: Calico CNI releasing IP address ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:27.007 [INFO][4436] ipam_plugin.go 411: Releasing address using handleID ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:27.008 [INFO][4436] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:27.008 [INFO][4436] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:27.022 [WARNING][4436] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:27.022 [INFO][4436] ipam_plugin.go 439: Releasing address using workloadID ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:27.025 [INFO][4436] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:27.036212 containerd[1464]: 2024-06-26 07:16:27.031 [INFO][4407] k8s.go 621: Teardown processing complete. ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:27.038605 containerd[1464]: time="2024-06-26T07:16:27.036495444Z" level=info msg="TearDown network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\" successfully" Jun 26 07:16:27.038605 containerd[1464]: time="2024-06-26T07:16:27.036546017Z" level=info msg="StopPodSandbox for \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\" returns successfully" Jun 26 07:16:27.045455 kubelet[2555]: E0626 07:16:27.040928 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:27.048735 containerd[1464]: time="2024-06-26T07:16:27.041879938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bhgmr,Uid:a5283807-341d-4398-a801-298fb264d1f1,Namespace:kube-system,Attempt:1,}" Jun 26 07:16:27.041523 systemd[1]: run-netns-cni\x2d2a2cae75\x2d0344\x2de03e\x2d28ea\x2da79ca4245cbf.mount: Deactivated successfully. 
Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:26.935 [INFO][4398] k8s.go 608: Cleaning up netns ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:26.935 [INFO][4398] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" iface="eth0" netns="/var/run/netns/cni-963bd4ad-a6a3-00e5-0246-b284aa9065da" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:26.936 [INFO][4398] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" iface="eth0" netns="/var/run/netns/cni-963bd4ad-a6a3-00e5-0246-b284aa9065da" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:26.937 [INFO][4398] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" iface="eth0" netns="/var/run/netns/cni-963bd4ad-a6a3-00e5-0246-b284aa9065da" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:26.937 [INFO][4398] k8s.go 615: Releasing IP address(es) ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:26.937 [INFO][4398] utils.go 188: Calico CNI releasing IP address ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:27.022 [INFO][4434] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:27.022 [INFO][4434] ipam_plugin.go 352: About to acquire host-wide IPAM 
lock. Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:27.026 [INFO][4434] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:27.055 [WARNING][4434] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:27.056 [INFO][4434] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:27.061 [INFO][4434] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:27.076589 containerd[1464]: 2024-06-26 07:16:27.068 [INFO][4398] k8s.go 621: Teardown processing complete. 
ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:27.079509 containerd[1464]: time="2024-06-26T07:16:27.077320952Z" level=info msg="TearDown network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\" successfully" Jun 26 07:16:27.079509 containerd[1464]: time="2024-06-26T07:16:27.077453829Z" level=info msg="StopPodSandbox for \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\" returns successfully" Jun 26 07:16:27.080925 containerd[1464]: time="2024-06-26T07:16:27.080265769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbf4c86b7-vs76m,Uid:d4b567d7-752d-409d-9367-5069e210e1e9,Namespace:calico-system,Attempt:1,}" Jun 26 07:16:27.091555 systemd[1]: run-netns-cni\x2d963bd4ad\x2da6a3\x2d00e5\x2d0246\x2db284aa9065da.mount: Deactivated successfully. Jun 26 07:16:27.165128 kubelet[2555]: E0626 07:16:27.163096 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:27.416209 systemd-networkd[1360]: calif7faefb6ade: Link UP Jun 26 07:16:27.419075 systemd-networkd[1360]: calif7faefb6ade: Gained carrier Jun 26 07:16:27.447833 kubelet[2555]: I0626 07:16:27.447708 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8wm6w" podStartSLOduration=36.447652028 podStartE2EDuration="36.447652028s" podCreationTimestamp="2024-06-26 07:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:16:27.224095528 +0000 UTC m=+48.826133388" watchObservedRunningTime="2024-06-26 07:16:27.447652028 +0000 UTC m=+49.049689887" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.237 [INFO][4460] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0 calico-kube-controllers-7bbf4c86b7- calico-system d4b567d7-752d-409d-9367-5069e210e1e9 870 0 2024-06-26 07:15:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bbf4c86b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.0.0-0-ebda1d1a0c calico-kube-controllers-7bbf4c86b7-vs76m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif7faefb6ade [] []}} ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Namespace="calico-system" Pod="calico-kube-controllers-7bbf4c86b7-vs76m" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.237 [INFO][4460] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Namespace="calico-system" Pod="calico-kube-controllers-7bbf4c86b7-vs76m" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.321 [INFO][4478] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" HandleID="k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.343 [INFO][4478] ipam_plugin.go 264: Auto assigning IP ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" 
HandleID="k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004faa90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-0-ebda1d1a0c", "pod":"calico-kube-controllers-7bbf4c86b7-vs76m", "timestamp":"2024-06-26 07:16:27.321088793 +0000 UTC"}, Hostname:"ci-4012.0.0-0-ebda1d1a0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.343 [INFO][4478] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.343 [INFO][4478] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.343 [INFO][4478] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-0-ebda1d1a0c' Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.347 [INFO][4478] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.357 [INFO][4478] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.375 [INFO][4478] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.380 [INFO][4478] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.384 [INFO][4478] ipam.go 232: Affinity is confirmed and block has 
been loaded cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.384 [INFO][4478] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.387 [INFO][4478] ipam.go 1685: Creating new handle: k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1 Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.393 [INFO][4478] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.402 [INFO][4478] ipam.go 1216: Successfully claimed IPs: [192.168.36.195/26] block=192.168.36.192/26 handle="k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.403 [INFO][4478] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.195/26] handle="k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.403 [INFO][4478] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 26 07:16:27.453898 containerd[1464]: 2024-06-26 07:16:27.403 [INFO][4478] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.195/26] IPv6=[] ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" HandleID="k8s-pod-network.8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.455882 containerd[1464]: 2024-06-26 07:16:27.408 [INFO][4460] k8s.go 386: Populated endpoint ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Namespace="calico-system" Pod="calico-kube-controllers-7bbf4c86b7-vs76m" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0", GenerateName:"calico-kube-controllers-7bbf4c86b7-", Namespace:"calico-system", SelfLink:"", UID:"d4b567d7-752d-409d-9367-5069e210e1e9", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbf4c86b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"", Pod:"calico-kube-controllers-7bbf4c86b7-vs76m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7faefb6ade", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:27.455882 containerd[1464]: 2024-06-26 07:16:27.408 [INFO][4460] k8s.go 387: Calico CNI using IPs: [192.168.36.195/32] ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Namespace="calico-system" Pod="calico-kube-controllers-7bbf4c86b7-vs76m" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.455882 containerd[1464]: 2024-06-26 07:16:27.408 [INFO][4460] dataplane_linux.go 68: Setting the host side veth name to calif7faefb6ade ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Namespace="calico-system" Pod="calico-kube-controllers-7bbf4c86b7-vs76m" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.455882 containerd[1464]: 2024-06-26 07:16:27.417 [INFO][4460] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Namespace="calico-system" Pod="calico-kube-controllers-7bbf4c86b7-vs76m" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.455882 containerd[1464]: 2024-06-26 07:16:27.418 [INFO][4460] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Namespace="calico-system" Pod="calico-kube-controllers-7bbf4c86b7-vs76m" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0", GenerateName:"calico-kube-controllers-7bbf4c86b7-", Namespace:"calico-system", SelfLink:"", UID:"d4b567d7-752d-409d-9367-5069e210e1e9", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbf4c86b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1", Pod:"calico-kube-controllers-7bbf4c86b7-vs76m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7faefb6ade", MAC:"72:5c:ab:d8:65:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:27.455882 containerd[1464]: 2024-06-26 07:16:27.445 [INFO][4460] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1" Namespace="calico-system" Pod="calico-kube-controllers-7bbf4c86b7-vs76m" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:27.514687 systemd-networkd[1360]: calie2e8ef9e653: Link UP Jun 26 07:16:27.515053 systemd-networkd[1360]: calie2e8ef9e653: 
Gained carrier Jun 26 07:16:27.521009 containerd[1464]: time="2024-06-26T07:16:27.520851297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:16:27.524464 containerd[1464]: time="2024-06-26T07:16:27.520933541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:27.526871 containerd[1464]: time="2024-06-26T07:16:27.525920737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:16:27.526871 containerd[1464]: time="2024-06-26T07:16:27.525973316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:27.538180 systemd-networkd[1360]: calia59ee8d28ce: Gained IPv6LL Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.249 [INFO][4451] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0 coredns-76f75df574- kube-system a5283807-341d-4398-a801-298fb264d1f1 871 0 2024-06-26 07:15:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-0-ebda1d1a0c coredns-76f75df574-bhgmr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2e8ef9e653 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Namespace="kube-system" Pod="coredns-76f75df574-bhgmr" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.249 [INFO][4451] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Namespace="kube-system" Pod="coredns-76f75df574-bhgmr" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.356 [INFO][4482] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" HandleID="k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.379 [INFO][4482] ipam_plugin.go 264: Auto assigning IP ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" HandleID="k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034e920), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-0-ebda1d1a0c", "pod":"coredns-76f75df574-bhgmr", "timestamp":"2024-06-26 07:16:27.356863743 +0000 UTC"}, Hostname:"ci-4012.0.0-0-ebda1d1a0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.379 [INFO][4482] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.403 [INFO][4482] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.403 [INFO][4482] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-0-ebda1d1a0c' Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.411 [INFO][4482] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.426 [INFO][4482] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.450 [INFO][4482] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.456 [INFO][4482] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.461 [INFO][4482] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.461 [INFO][4482] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.467 [INFO][4482] ipam.go 1685: Creating new handle: k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843 Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.477 [INFO][4482] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.489 [INFO][4482] ipam.go 1216: Successfully claimed IPs: [192.168.36.196/26] 
block=192.168.36.192/26 handle="k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.489 [INFO][4482] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.196/26] handle="k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.489 [INFO][4482] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:27.562016 containerd[1464]: 2024-06-26 07:16:27.489 [INFO][4482] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.196/26] IPv6=[] ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" HandleID="k8s-pod-network.a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.564489 containerd[1464]: 2024-06-26 07:16:27.500 [INFO][4451] k8s.go 386: Populated endpoint ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Namespace="kube-system" Pod="coredns-76f75df574-bhgmr" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a5283807-341d-4398-a801-298fb264d1f1", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"", Pod:"coredns-76f75df574-bhgmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2e8ef9e653", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:27.564489 containerd[1464]: 2024-06-26 07:16:27.503 [INFO][4451] k8s.go 387: Calico CNI using IPs: [192.168.36.196/32] ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Namespace="kube-system" Pod="coredns-76f75df574-bhgmr" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.564489 containerd[1464]: 2024-06-26 07:16:27.503 [INFO][4451] dataplane_linux.go 68: Setting the host side veth name to calie2e8ef9e653 ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Namespace="kube-system" Pod="coredns-76f75df574-bhgmr" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.564489 containerd[1464]: 2024-06-26 07:16:27.515 [INFO][4451] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Namespace="kube-system" 
Pod="coredns-76f75df574-bhgmr" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.564489 containerd[1464]: 2024-06-26 07:16:27.520 [INFO][4451] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Namespace="kube-system" Pod="coredns-76f75df574-bhgmr" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a5283807-341d-4398-a801-298fb264d1f1", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843", Pod:"coredns-76f75df574-bhgmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2e8ef9e653", MAC:"4e:42:97:36:70:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:27.564489 containerd[1464]: 2024-06-26 07:16:27.542 [INFO][4451] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843" Namespace="kube-system" Pod="coredns-76f75df574-bhgmr" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:27.601131 systemd[1]: Started cri-containerd-8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1.scope - libcontainer container 8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1. Jun 26 07:16:27.623173 containerd[1464]: time="2024-06-26T07:16:27.621515749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:16:27.623173 containerd[1464]: time="2024-06-26T07:16:27.622528671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:27.623173 containerd[1464]: time="2024-06-26T07:16:27.622556872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:16:27.623173 containerd[1464]: time="2024-06-26T07:16:27.622570603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:16:27.652273 systemd[1]: Started cri-containerd-a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843.scope - libcontainer container a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843. 
Jun 26 07:16:27.731710 containerd[1464]: time="2024-06-26T07:16:27.731566843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbf4c86b7-vs76m,Uid:d4b567d7-752d-409d-9367-5069e210e1e9,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1\""
Jun 26 07:16:27.765826 containerd[1464]: time="2024-06-26T07:16:27.765591365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bhgmr,Uid:a5283807-341d-4398-a801-298fb264d1f1,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843\""
Jun 26 07:16:27.768648 kubelet[2555]: E0626 07:16:27.767962 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:16:27.771494 containerd[1464]: time="2024-06-26T07:16:27.771439769Z" level=info msg="CreateContainer within sandbox \"a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 26 07:16:27.795632 containerd[1464]: time="2024-06-26T07:16:27.795534530Z" level=info msg="CreateContainer within sandbox \"a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04582fcfe2ac0ed0c59a84221edcba9ba2822b38207aa3076902af546e408dbe\""
Jun 26 07:16:27.796695 containerd[1464]: time="2024-06-26T07:16:27.796376384Z" level=info msg="StartContainer for \"04582fcfe2ac0ed0c59a84221edcba9ba2822b38207aa3076902af546e408dbe\""
Jun 26 07:16:27.839118 systemd[1]: Started cri-containerd-04582fcfe2ac0ed0c59a84221edcba9ba2822b38207aa3076902af546e408dbe.scope - libcontainer container 04582fcfe2ac0ed0c59a84221edcba9ba2822b38207aa3076902af546e408dbe.
Jun 26 07:16:27.893647 containerd[1464]: time="2024-06-26T07:16:27.893574085Z" level=info msg="StartContainer for \"04582fcfe2ac0ed0c59a84221edcba9ba2822b38207aa3076902af546e408dbe\" returns successfully" Jun 26 07:16:28.171734 kubelet[2555]: E0626 07:16:28.169921 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:28.173725 kubelet[2555]: E0626 07:16:28.173637 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:28.217819 kubelet[2555]: I0626 07:16:28.216604 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bhgmr" podStartSLOduration=37.216537171 podStartE2EDuration="37.216537171s" podCreationTimestamp="2024-06-26 07:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:16:28.190703105 +0000 UTC m=+49.792740955" watchObservedRunningTime="2024-06-26 07:16:28.216537171 +0000 UTC m=+49.818575040" Jun 26 07:16:28.596689 containerd[1464]: time="2024-06-26T07:16:28.596541117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:28.598989 containerd[1464]: time="2024-06-26T07:16:28.598861073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 26 07:16:28.599771 containerd[1464]: time="2024-06-26T07:16:28.599675388Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:28.604878 
containerd[1464]: time="2024-06-26T07:16:28.604721965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:28.609495 containerd[1464]: time="2024-06-26T07:16:28.609277787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.71720407s" Jun 26 07:16:28.609495 containerd[1464]: time="2024-06-26T07:16:28.609374626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 26 07:16:28.616956 containerd[1464]: time="2024-06-26T07:16:28.616235207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 26 07:16:28.624842 containerd[1464]: time="2024-06-26T07:16:28.624067635Z" level=info msg="CreateContainer within sandbox \"46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 26 07:16:28.669268 containerd[1464]: time="2024-06-26T07:16:28.669174647Z" level=info msg="CreateContainer within sandbox \"46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3aed25d402fd4bce19e3aa3510da70db4b6fc8e3df9636afd56572224166a2d2\"" Jun 26 07:16:28.671241 containerd[1464]: time="2024-06-26T07:16:28.670635070Z" level=info msg="StartContainer for \"3aed25d402fd4bce19e3aa3510da70db4b6fc8e3df9636afd56572224166a2d2\"" Jun 
26 07:16:28.761578 systemd[1]: Started cri-containerd-3aed25d402fd4bce19e3aa3510da70db4b6fc8e3df9636afd56572224166a2d2.scope - libcontainer container 3aed25d402fd4bce19e3aa3510da70db4b6fc8e3df9636afd56572224166a2d2. Jun 26 07:16:28.842493 containerd[1464]: time="2024-06-26T07:16:28.842435858Z" level=info msg="StartContainer for \"3aed25d402fd4bce19e3aa3510da70db4b6fc8e3df9636afd56572224166a2d2\" returns successfully" Jun 26 07:16:28.945103 systemd-networkd[1360]: calie2e8ef9e653: Gained IPv6LL Jun 26 07:16:29.135127 kubelet[2555]: I0626 07:16:29.135084 2555 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 26 07:16:29.138794 systemd-networkd[1360]: calif7faefb6ade: Gained IPv6LL Jun 26 07:16:29.149100 kubelet[2555]: I0626 07:16:29.149025 2555 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 26 07:16:29.174649 kubelet[2555]: E0626 07:16:29.174410 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:29.182946 kubelet[2555]: E0626 07:16:29.182893 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:29.206616 kubelet[2555]: I0626 07:16:29.206455 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-whchj" podStartSLOduration=27.291985226 podStartE2EDuration="31.206407557s" podCreationTimestamp="2024-06-26 07:15:58 +0000 UTC" firstStartedPulling="2024-06-26 07:16:24.69778488 +0000 UTC m=+46.299822732" lastFinishedPulling="2024-06-26 07:16:28.612207226 +0000 UTC m=+50.214245063" 
observedRunningTime="2024-06-26 07:16:29.204728034 +0000 UTC m=+50.806765894" watchObservedRunningTime="2024-06-26 07:16:29.206407557 +0000 UTC m=+50.808445416" Jun 26 07:16:30.177886 kubelet[2555]: E0626 07:16:30.177267 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:30.200461 kubelet[2555]: E0626 07:16:30.199820 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:31.589839 containerd[1464]: time="2024-06-26T07:16:31.589074360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:31.590490 containerd[1464]: time="2024-06-26T07:16:31.590439369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 26 07:16:31.591245 containerd[1464]: time="2024-06-26T07:16:31.591166427Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:31.594087 containerd[1464]: time="2024-06-26T07:16:31.594026412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:31.595461 containerd[1464]: time="2024-06-26T07:16:31.595416017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.978986635s" Jun 26 07:16:31.595649 containerd[1464]: time="2024-06-26T07:16:31.595629794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 26 07:16:31.621060 containerd[1464]: time="2024-06-26T07:16:31.621002404Z" level=info msg="CreateContainer within sandbox \"8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 26 07:16:31.643103 containerd[1464]: time="2024-06-26T07:16:31.643055908Z" level=info msg="CreateContainer within sandbox \"8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fd25302e861b9518232e0b4fac8a9fbd3a6db6c3fac5fc3453b8b0daf9fac266\"" Jun 26 07:16:31.646188 containerd[1464]: time="2024-06-26T07:16:31.646117124Z" level=info msg="StartContainer for \"fd25302e861b9518232e0b4fac8a9fbd3a6db6c3fac5fc3453b8b0daf9fac266\"" Jun 26 07:16:31.706273 systemd[1]: Started cri-containerd-fd25302e861b9518232e0b4fac8a9fbd3a6db6c3fac5fc3453b8b0daf9fac266.scope - libcontainer container fd25302e861b9518232e0b4fac8a9fbd3a6db6c3fac5fc3453b8b0daf9fac266. 
Jun 26 07:16:31.780381 containerd[1464]: time="2024-06-26T07:16:31.780306533Z" level=info msg="StartContainer for \"fd25302e861b9518232e0b4fac8a9fbd3a6db6c3fac5fc3453b8b0daf9fac266\" returns successfully"
Jun 26 07:16:32.346067 kubelet[2555]: I0626 07:16:32.345277 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bbf4c86b7-vs76m" podStartSLOduration=29.493403776 podStartE2EDuration="33.34522183s" podCreationTimestamp="2024-06-26 07:15:59 +0000 UTC" firstStartedPulling="2024-06-26 07:16:27.744351123 +0000 UTC m=+49.346388970" lastFinishedPulling="2024-06-26 07:16:31.596169175 +0000 UTC m=+53.198207024" observedRunningTime="2024-06-26 07:16:32.287971166 +0000 UTC m=+53.890009049" watchObservedRunningTime="2024-06-26 07:16:32.34522183 +0000 UTC m=+53.947259696"
Jun 26 07:16:34.675865 systemd[1]: Started sshd@9-165.232.133.181:22-147.75.109.163:44906.service - OpenSSH per-connection server daemon (147.75.109.163:44906).
Jun 26 07:16:34.787632 sshd[4761]: Accepted publickey for core from 147.75.109.163 port 44906 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:16:34.795281 sshd[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:16:34.809271 systemd-logind[1448]: New session 10 of user core.
Jun 26 07:16:34.818045 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 26 07:16:35.111859 sshd[4761]: pam_unix(sshd:session): session closed for user core
Jun 26 07:16:35.117232 systemd[1]: sshd@9-165.232.133.181:22-147.75.109.163:44906.service: Deactivated successfully.
Jun 26 07:16:35.120634 systemd[1]: session-10.scope: Deactivated successfully.
Jun 26 07:16:35.122224 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit.
Jun 26 07:16:35.123552 systemd-logind[1448]: Removed session 10.
Jun 26 07:16:38.739590 containerd[1464]: time="2024-06-26T07:16:38.739550410Z" level=info msg="StopPodSandbox for \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\"" Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.793 [WARNING][4789] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39f4ab23-9ba0-40aa-ae86-45cf1e57277c", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37", Pod:"coredns-76f75df574-8wm6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia59ee8d28ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.793 [INFO][4789] k8s.go 608: Cleaning up netns ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.793 [INFO][4789] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" iface="eth0" netns="" Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.793 [INFO][4789] k8s.go 615: Releasing IP address(es) ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.793 [INFO][4789] utils.go 188: Calico CNI releasing IP address ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.824 [INFO][4797] ipam_plugin.go 411: Releasing address using handleID ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.824 [INFO][4797] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.824 [INFO][4797] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.832 [WARNING][4797] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.832 [INFO][4797] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.835 [INFO][4797] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:38.840833 containerd[1464]: 2024-06-26 07:16:38.838 [INFO][4789] k8s.go 621: Teardown processing complete. ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:38.842147 containerd[1464]: time="2024-06-26T07:16:38.840920899Z" level=info msg="TearDown network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\" successfully" Jun 26 07:16:38.842147 containerd[1464]: time="2024-06-26T07:16:38.840961377Z" level=info msg="StopPodSandbox for \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\" returns successfully" Jun 26 07:16:38.847805 containerd[1464]: time="2024-06-26T07:16:38.846693285Z" level=info msg="RemovePodSandbox for \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\"" Jun 26 07:16:38.850177 containerd[1464]: time="2024-06-26T07:16:38.850116012Z" level=info msg="Forcibly stopping sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\"" Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.941 [WARNING][4816] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39f4ab23-9ba0-40aa-ae86-45cf1e57277c", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"5e0b5143a26856c0f4a80d744cc1043b7c508b31c4f39bf75934a83af9e67f37", Pod:"coredns-76f75df574-8wm6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia59ee8d28ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.942 [INFO][4816] k8s.go 608: 
Cleaning up netns ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.942 [INFO][4816] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" iface="eth0" netns="" Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.943 [INFO][4816] k8s.go 615: Releasing IP address(es) ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.943 [INFO][4816] utils.go 188: Calico CNI releasing IP address ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.978 [INFO][4822] ipam_plugin.go 411: Releasing address using handleID ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.978 [INFO][4822] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.978 [INFO][4822] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.988 [WARNING][4822] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.988 [INFO][4822] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" HandleID="k8s-pod-network.7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--8wm6w-eth0" Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.992 [INFO][4822] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:39.002468 containerd[1464]: 2024-06-26 07:16:38.996 [INFO][4816] k8s.go 621: Teardown processing complete. ContainerID="7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19" Jun 26 07:16:39.003179 containerd[1464]: time="2024-06-26T07:16:39.002465373Z" level=info msg="TearDown network for sandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\" successfully" Jun 26 07:16:39.061059 containerd[1464]: time="2024-06-26T07:16:39.060981180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:16:39.061257 containerd[1464]: time="2024-06-26T07:16:39.061136118Z" level=info msg="RemovePodSandbox \"7433ee3040e500ed4f313a1ca45a3798169aa7dd30e72bc237b80f1cc7e0fc19\" returns successfully" Jun 26 07:16:39.063076 containerd[1464]: time="2024-06-26T07:16:39.062404106Z" level=info msg="StopPodSandbox for \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\"" Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.159 [WARNING][4840] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57e08add-fe69-4f05-8bca-834c135d01cc", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345", Pod:"csi-node-driver-whchj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliaf9e40da153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.159 [INFO][4840] k8s.go 608: Cleaning up netns ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.159 [INFO][4840] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" iface="eth0" netns="" Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.159 [INFO][4840] k8s.go 615: Releasing IP address(es) ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.159 [INFO][4840] utils.go 188: Calico CNI releasing IP address ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.188 [INFO][4846] ipam_plugin.go 411: Releasing address using handleID ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.188 [INFO][4846] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.188 [INFO][4846] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.196 [WARNING][4846] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.196 [INFO][4846] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.203 [INFO][4846] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:39.207517 containerd[1464]: 2024-06-26 07:16:39.205 [INFO][4840] k8s.go 621: Teardown processing complete. ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:39.209127 containerd[1464]: time="2024-06-26T07:16:39.208067243Z" level=info msg="TearDown network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\" successfully" Jun 26 07:16:39.209127 containerd[1464]: time="2024-06-26T07:16:39.208106022Z" level=info msg="StopPodSandbox for \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\" returns successfully" Jun 26 07:16:39.209127 containerd[1464]: time="2024-06-26T07:16:39.208685144Z" level=info msg="RemovePodSandbox for \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\"" Jun 26 07:16:39.209127 containerd[1464]: time="2024-06-26T07:16:39.208723748Z" level=info msg="Forcibly stopping sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\"" Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.277 [WARNING][4864] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57e08add-fe69-4f05-8bca-834c135d01cc", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"46473417f7396adabe2b4198958182b00ef9f0b97c3af741a3c97e8d10a6a345", Pod:"csi-node-driver-whchj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliaf9e40da153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.280 [INFO][4864] k8s.go 608: Cleaning up netns ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.280 [INFO][4864] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" iface="eth0" netns="" Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.280 [INFO][4864] k8s.go 615: Releasing IP address(es) ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.280 [INFO][4864] utils.go 188: Calico CNI releasing IP address ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.348 [INFO][4871] ipam_plugin.go 411: Releasing address using handleID ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.348 [INFO][4871] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.349 [INFO][4871] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.360 [WARNING][4871] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.360 [INFO][4871] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" HandleID="k8s-pod-network.de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-csi--node--driver--whchj-eth0" Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.364 [INFO][4871] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:39.368961 containerd[1464]: 2024-06-26 07:16:39.366 [INFO][4864] k8s.go 621: Teardown processing complete. ContainerID="de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4" Jun 26 07:16:39.370680 containerd[1464]: time="2024-06-26T07:16:39.369885813Z" level=info msg="TearDown network for sandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\" successfully" Jun 26 07:16:39.374317 containerd[1464]: time="2024-06-26T07:16:39.374033567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:16:39.374317 containerd[1464]: time="2024-06-26T07:16:39.374190602Z" level=info msg="RemovePodSandbox \"de194b93043586760020d8ab8823aa3c3329a1fbe5dffbba429aacd169df80c4\" returns successfully" Jun 26 07:16:39.375339 containerd[1464]: time="2024-06-26T07:16:39.375210618Z" level=info msg="StopPodSandbox for \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\"" Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.430 [WARNING][4895] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0", GenerateName:"calico-kube-controllers-7bbf4c86b7-", Namespace:"calico-system", SelfLink:"", UID:"d4b567d7-752d-409d-9367-5069e210e1e9", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbf4c86b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1", Pod:"calico-kube-controllers-7bbf4c86b7-vs76m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7faefb6ade", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.431 [INFO][4895] k8s.go 608: Cleaning up netns ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.431 [INFO][4895] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" iface="eth0" netns="" Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.431 [INFO][4895] k8s.go 615: Releasing IP address(es) ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.431 [INFO][4895] utils.go 188: Calico CNI releasing IP address ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.460 [INFO][4901] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.460 [INFO][4901] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.460 [INFO][4901] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.468 [WARNING][4901] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.468 [INFO][4901] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.471 [INFO][4901] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:39.476819 containerd[1464]: 2024-06-26 07:16:39.474 [INFO][4895] k8s.go 621: Teardown processing complete. ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:39.476819 containerd[1464]: time="2024-06-26T07:16:39.476807693Z" level=info msg="TearDown network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\" successfully" Jun 26 07:16:39.476819 containerd[1464]: time="2024-06-26T07:16:39.476836846Z" level=info msg="StopPodSandbox for \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\" returns successfully" Jun 26 07:16:39.477994 containerd[1464]: time="2024-06-26T07:16:39.477559139Z" level=info msg="RemovePodSandbox for \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\"" Jun 26 07:16:39.477994 containerd[1464]: time="2024-06-26T07:16:39.477596601Z" level=info msg="Forcibly stopping sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\"" Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.530 [WARNING][4919] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0", GenerateName:"calico-kube-controllers-7bbf4c86b7-", Namespace:"calico-system", SelfLink:"", UID:"d4b567d7-752d-409d-9367-5069e210e1e9", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbf4c86b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"8a659c1ab7037293afef3b42048a398e1a2bf302e6a4ae9ebaa77e82aea1dcd1", Pod:"calico-kube-controllers-7bbf4c86b7-vs76m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7faefb6ade", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.530 [INFO][4919] k8s.go 608: Cleaning up netns ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.530 [INFO][4919] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" iface="eth0" netns="" Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.530 [INFO][4919] k8s.go 615: Releasing IP address(es) ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.530 [INFO][4919] utils.go 188: Calico CNI releasing IP address ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.562 [INFO][4925] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.562 [INFO][4925] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.562 [INFO][4925] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.570 [WARNING][4925] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.570 [INFO][4925] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" HandleID="k8s-pod-network.0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--kube--controllers--7bbf4c86b7--vs76m-eth0" Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.573 [INFO][4925] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:39.577508 containerd[1464]: 2024-06-26 07:16:39.575 [INFO][4919] k8s.go 621: Teardown processing complete. ContainerID="0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9" Jun 26 07:16:39.578094 containerd[1464]: time="2024-06-26T07:16:39.577586062Z" level=info msg="TearDown network for sandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\" successfully" Jun 26 07:16:39.581429 containerd[1464]: time="2024-06-26T07:16:39.581327483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:16:39.581611 containerd[1464]: time="2024-06-26T07:16:39.581480549Z" level=info msg="RemovePodSandbox \"0d062ae00e181d837c765207a10fc6ae481583e17c536bd03b961b135a2b21c9\" returns successfully" Jun 26 07:16:39.582491 containerd[1464]: time="2024-06-26T07:16:39.582279725Z" level=info msg="StopPodSandbox for \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\"" Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.644 [WARNING][4943] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a5283807-341d-4398-a801-298fb264d1f1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843", Pod:"coredns-76f75df574-bhgmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2e8ef9e653", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.644 [INFO][4943] k8s.go 608: Cleaning up netns ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.645 [INFO][4943] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" iface="eth0" netns="" Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.645 [INFO][4943] k8s.go 615: Releasing IP address(es) ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.645 [INFO][4943] utils.go 188: Calico CNI releasing IP address ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.694 [INFO][4949] ipam_plugin.go 411: Releasing address using handleID ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.694 [INFO][4949] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.695 [INFO][4949] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.703 [WARNING][4949] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.703 [INFO][4949] ipam_plugin.go 439: Releasing address using workloadID ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.706 [INFO][4949] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:39.712648 containerd[1464]: 2024-06-26 07:16:39.709 [INFO][4943] k8s.go 621: Teardown processing complete. 
ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:39.712648 containerd[1464]: time="2024-06-26T07:16:39.712340995Z" level=info msg="TearDown network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\" successfully" Jun 26 07:16:39.712648 containerd[1464]: time="2024-06-26T07:16:39.712368223Z" level=info msg="StopPodSandbox for \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\" returns successfully" Jun 26 07:16:39.713241 containerd[1464]: time="2024-06-26T07:16:39.712921926Z" level=info msg="RemovePodSandbox for \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\"" Jun 26 07:16:39.713241 containerd[1464]: time="2024-06-26T07:16:39.712958261Z" level=info msg="Forcibly stopping sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\"" Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.779 [WARNING][4967] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a5283807-341d-4398-a801-298fb264d1f1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"a8aa38bba98ef6df52938630ccfbe3dd7f753e7af100218862a9582fe5844843", Pod:"coredns-76f75df574-bhgmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2e8ef9e653", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.779 [INFO][4967] k8s.go 608: 
Cleaning up netns ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.779 [INFO][4967] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" iface="eth0" netns="" Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.779 [INFO][4967] k8s.go 615: Releasing IP address(es) ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.779 [INFO][4967] utils.go 188: Calico CNI releasing IP address ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.809 [INFO][4973] ipam_plugin.go 411: Releasing address using handleID ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.810 [INFO][4973] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.810 [INFO][4973] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.817 [WARNING][4973] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.818 [INFO][4973] ipam_plugin.go 439: Releasing address using workloadID ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" HandleID="k8s-pod-network.70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-coredns--76f75df574--bhgmr-eth0" Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.821 [INFO][4973] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 26 07:16:39.826487 containerd[1464]: 2024-06-26 07:16:39.823 [INFO][4967] k8s.go 621: Teardown processing complete. ContainerID="70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80" Jun 26 07:16:39.827754 containerd[1464]: time="2024-06-26T07:16:39.826552528Z" level=info msg="TearDown network for sandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\" successfully" Jun 26 07:16:39.831248 containerd[1464]: time="2024-06-26T07:16:39.831166191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:16:39.831726 containerd[1464]: time="2024-06-26T07:16:39.831291650Z" level=info msg="RemovePodSandbox \"70d914ace24303220258698d775c0c8356e250e52d34e44b3c7d52f0b335ff80\" returns successfully" Jun 26 07:16:39.831978 containerd[1464]: time="2024-06-26T07:16:39.831886240Z" level=info msg="StopPodSandbox for \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\"" Jun 26 07:16:39.851508 containerd[1464]: time="2024-06-26T07:16:39.832025033Z" level=info msg="TearDown network for sandbox \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\" successfully" Jun 26 07:16:39.851508 containerd[1464]: time="2024-06-26T07:16:39.851494595Z" level=info msg="StopPodSandbox for \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\" returns successfully" Jun 26 07:16:39.853197 containerd[1464]: time="2024-06-26T07:16:39.852993290Z" level=info msg="RemovePodSandbox for \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\"" Jun 26 07:16:39.853197 containerd[1464]: time="2024-06-26T07:16:39.853056986Z" level=info msg="Forcibly stopping sandbox \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\"" Jun 26 07:16:39.853197 containerd[1464]: time="2024-06-26T07:16:39.853153247Z" level=info msg="TearDown network for sandbox \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\" successfully" Jun 26 07:16:39.857690 containerd[1464]: time="2024-06-26T07:16:39.857574511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:16:39.857942 containerd[1464]: time="2024-06-26T07:16:39.857709728Z" level=info msg="RemovePodSandbox \"94ab79e09be45b684350370b2dda6b1285df379c87ffb196fe2d3cf2c0765330\" returns successfully" Jun 26 07:16:39.859127 containerd[1464]: time="2024-06-26T07:16:39.859029505Z" level=info msg="StopPodSandbox for \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\"" Jun 26 07:16:39.859274 containerd[1464]: time="2024-06-26T07:16:39.859189182Z" level=info msg="TearDown network for sandbox \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\" successfully" Jun 26 07:16:39.859274 containerd[1464]: time="2024-06-26T07:16:39.859207035Z" level=info msg="StopPodSandbox for \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\" returns successfully" Jun 26 07:16:39.860848 containerd[1464]: time="2024-06-26T07:16:39.859826728Z" level=info msg="RemovePodSandbox for \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\"" Jun 26 07:16:39.860848 containerd[1464]: time="2024-06-26T07:16:39.859859621Z" level=info msg="Forcibly stopping sandbox \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\"" Jun 26 07:16:39.860848 containerd[1464]: time="2024-06-26T07:16:39.859932184Z" level=info msg="TearDown network for sandbox \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\" successfully" Jun 26 07:16:39.864524 containerd[1464]: time="2024-06-26T07:16:39.864464068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 26 07:16:39.864950 containerd[1464]: time="2024-06-26T07:16:39.864849319Z" level=info msg="RemovePodSandbox \"088e0e913189ae445319f4761f150238e81e8e1dbe42d8c60d8c4168efaef988\" returns successfully" Jun 26 07:16:40.133590 systemd[1]: Started sshd@10-165.232.133.181:22-147.75.109.163:53226.service - OpenSSH per-connection server daemon (147.75.109.163:53226). Jun 26 07:16:40.239854 sshd[4980]: Accepted publickey for core from 147.75.109.163 port 53226 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:40.241916 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:40.247800 systemd-logind[1448]: New session 11 of user core. Jun 26 07:16:40.253081 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 26 07:16:40.476618 sshd[4980]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:40.482067 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jun 26 07:16:40.482299 systemd[1]: sshd@10-165.232.133.181:22-147.75.109.163:53226.service: Deactivated successfully. Jun 26 07:16:40.485149 systemd[1]: session-11.scope: Deactivated successfully. Jun 26 07:16:40.487910 systemd-logind[1448]: Removed session 11. Jun 26 07:16:45.495326 systemd[1]: Started sshd@11-165.232.133.181:22-147.75.109.163:53242.service - OpenSSH per-connection server daemon (147.75.109.163:53242). Jun 26 07:16:45.550590 sshd[5017]: Accepted publickey for core from 147.75.109.163 port 53242 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:45.553046 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:45.561093 systemd-logind[1448]: New session 12 of user core. Jun 26 07:16:45.566132 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 26 07:16:45.716943 sshd[5017]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:45.720837 systemd-logind[1448]: Session 12 logged out. 
Waiting for processes to exit. Jun 26 07:16:45.721315 systemd[1]: sshd@11-165.232.133.181:22-147.75.109.163:53242.service: Deactivated successfully. Jun 26 07:16:45.725098 systemd[1]: session-12.scope: Deactivated successfully. Jun 26 07:16:45.728453 systemd-logind[1448]: Removed session 12. Jun 26 07:16:49.734928 kubelet[2555]: E0626 07:16:49.734765 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:16:50.741380 systemd[1]: Started sshd@12-165.232.133.181:22-147.75.109.163:55048.service - OpenSSH per-connection server daemon (147.75.109.163:55048). Jun 26 07:16:50.850985 sshd[5057]: Accepted publickey for core from 147.75.109.163 port 55048 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:50.856546 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:50.867079 systemd-logind[1448]: New session 13 of user core. Jun 26 07:16:50.873153 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 26 07:16:51.101971 sshd[5057]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:51.115390 systemd[1]: sshd@12-165.232.133.181:22-147.75.109.163:55048.service: Deactivated successfully. Jun 26 07:16:51.119432 systemd[1]: session-13.scope: Deactivated successfully. Jun 26 07:16:51.121274 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jun 26 07:16:51.131667 systemd[1]: Started sshd@13-165.232.133.181:22-147.75.109.163:55054.service - OpenSSH per-connection server daemon (147.75.109.163:55054). Jun 26 07:16:51.134729 systemd-logind[1448]: Removed session 13. 
Jun 26 07:16:51.210388 sshd[5071]: Accepted publickey for core from 147.75.109.163 port 55054 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:51.213642 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:51.234916 systemd-logind[1448]: New session 14 of user core. Jun 26 07:16:51.247873 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 26 07:16:51.570426 sshd[5071]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:51.593321 systemd[1]: Started sshd@14-165.232.133.181:22-147.75.109.163:55068.service - OpenSSH per-connection server daemon (147.75.109.163:55068). Jun 26 07:16:51.595187 systemd[1]: sshd@13-165.232.133.181:22-147.75.109.163:55054.service: Deactivated successfully. Jun 26 07:16:51.604532 systemd[1]: session-14.scope: Deactivated successfully. Jun 26 07:16:51.610148 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jun 26 07:16:51.617935 systemd-logind[1448]: Removed session 14. Jun 26 07:16:51.740908 sshd[5086]: Accepted publickey for core from 147.75.109.163 port 55068 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:51.743955 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:51.753688 systemd-logind[1448]: New session 15 of user core. Jun 26 07:16:51.761191 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 26 07:16:52.060624 sshd[5086]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:52.068647 systemd[1]: sshd@14-165.232.133.181:22-147.75.109.163:55068.service: Deactivated successfully. Jun 26 07:16:52.073981 systemd[1]: session-15.scope: Deactivated successfully. Jun 26 07:16:52.078043 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jun 26 07:16:52.080836 systemd-logind[1448]: Removed session 15. 
Jun 26 07:16:57.092338 systemd[1]: Started sshd@15-165.232.133.181:22-147.75.109.163:52752.service - OpenSSH per-connection server daemon (147.75.109.163:52752). Jun 26 07:16:57.150426 sshd[5107]: Accepted publickey for core from 147.75.109.163 port 52752 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:57.152861 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:57.160909 systemd-logind[1448]: New session 16 of user core. Jun 26 07:16:57.169206 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 26 07:16:57.387733 sshd[5107]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:57.394715 systemd[1]: sshd@15-165.232.133.181:22-147.75.109.163:52752.service: Deactivated successfully. Jun 26 07:16:57.399463 systemd[1]: session-16.scope: Deactivated successfully. Jun 26 07:16:57.408602 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jun 26 07:16:57.410833 systemd-logind[1448]: Removed session 16. Jun 26 07:17:01.732738 kubelet[2555]: E0626 07:17:01.732677 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:02.410348 systemd[1]: Started sshd@16-165.232.133.181:22-147.75.109.163:52754.service - OpenSSH per-connection server daemon (147.75.109.163:52754). Jun 26 07:17:02.502804 sshd[5130]: Accepted publickey for core from 147.75.109.163 port 52754 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:17:02.505422 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:17:02.512143 systemd-logind[1448]: New session 17 of user core. Jun 26 07:17:02.519133 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 26 07:17:02.831769 sshd[5130]: pam_unix(sshd:session): session closed for user core Jun 26 07:17:02.846920 systemd[1]: sshd@16-165.232.133.181:22-147.75.109.163:52754.service: Deactivated successfully. Jun 26 07:17:02.855676 systemd[1]: session-17.scope: Deactivated successfully. Jun 26 07:17:02.858279 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jun 26 07:17:02.861800 systemd-logind[1448]: Removed session 17. Jun 26 07:17:07.732529 kubelet[2555]: E0626 07:17:07.732462 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:07.855111 systemd[1]: Started sshd@17-165.232.133.181:22-147.75.109.163:58106.service - OpenSSH per-connection server daemon (147.75.109.163:58106). Jun 26 07:17:07.908732 sshd[5145]: Accepted publickey for core from 147.75.109.163 port 58106 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:17:07.912107 sshd[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:17:07.926650 systemd-logind[1448]: New session 18 of user core. Jun 26 07:17:07.934170 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 26 07:17:08.199624 sshd[5145]: pam_unix(sshd:session): session closed for user core Jun 26 07:17:08.207708 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jun 26 07:17:08.208289 systemd[1]: sshd@17-165.232.133.181:22-147.75.109.163:58106.service: Deactivated successfully. Jun 26 07:17:08.210901 systemd[1]: session-18.scope: Deactivated successfully. Jun 26 07:17:08.213926 systemd-logind[1448]: Removed session 18. 
Jun 26 07:17:09.732456 kubelet[2555]: E0626 07:17:09.732389 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:12.736422 kubelet[2555]: E0626 07:17:12.734838 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:13.221857 kubelet[2555]: I0626 07:17:13.221086 2555 topology_manager.go:215] "Topology Admit Handler" podUID="5c0588e0-5afa-4d4e-a3ef-71c15aee95fd" podNamespace="calico-apiserver" podName="calico-apiserver-7b7756c7fd-2ntwt" Jun 26 07:17:13.233076 systemd[1]: Started sshd@18-165.232.133.181:22-147.75.109.163:58120.service - OpenSSH per-connection server daemon (147.75.109.163:58120). Jun 26 07:17:13.285238 systemd[1]: Created slice kubepods-besteffort-pod5c0588e0_5afa_4d4e_a3ef_71c15aee95fd.slice - libcontainer container kubepods-besteffort-pod5c0588e0_5afa_4d4e_a3ef_71c15aee95fd.slice. Jun 26 07:17:13.384530 sshd[5182]: Accepted publickey for core from 147.75.109.163 port 58120 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:17:13.388772 sshd[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:17:13.402450 systemd-logind[1448]: New session 19 of user core. 
Jun 26 07:17:13.409072 kubelet[2555]: I0626 07:17:13.408955 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdtqd\" (UniqueName: \"kubernetes.io/projected/5c0588e0-5afa-4d4e-a3ef-71c15aee95fd-kube-api-access-tdtqd\") pod \"calico-apiserver-7b7756c7fd-2ntwt\" (UID: \"5c0588e0-5afa-4d4e-a3ef-71c15aee95fd\") " pod="calico-apiserver/calico-apiserver-7b7756c7fd-2ntwt" Jun 26 07:17:13.411877 kubelet[2555]: I0626 07:17:13.411089 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c0588e0-5afa-4d4e-a3ef-71c15aee95fd-calico-apiserver-certs\") pod \"calico-apiserver-7b7756c7fd-2ntwt\" (UID: \"5c0588e0-5afa-4d4e-a3ef-71c15aee95fd\") " pod="calico-apiserver/calico-apiserver-7b7756c7fd-2ntwt" Jun 26 07:17:13.413274 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 26 07:17:13.513338 kubelet[2555]: E0626 07:17:13.513043 2555 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 26 07:17:13.524340 kubelet[2555]: E0626 07:17:13.524275 2555 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c0588e0-5afa-4d4e-a3ef-71c15aee95fd-calico-apiserver-certs podName:5c0588e0-5afa-4d4e-a3ef-71c15aee95fd nodeName:}" failed. No retries permitted until 2024-06-26 07:17:14.013150341 +0000 UTC m=+95.615188202 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/5c0588e0-5afa-4d4e-a3ef-71c15aee95fd-calico-apiserver-certs") pod "calico-apiserver-7b7756c7fd-2ntwt" (UID: "5c0588e0-5afa-4d4e-a3ef-71c15aee95fd") : secret "calico-apiserver-certs" not found Jun 26 07:17:14.192854 containerd[1464]: time="2024-06-26T07:17:14.192577650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7756c7fd-2ntwt,Uid:5c0588e0-5afa-4d4e-a3ef-71c15aee95fd,Namespace:calico-apiserver,Attempt:0,}" Jun 26 07:17:14.272140 sshd[5182]: pam_unix(sshd:session): session closed for user core Jun 26 07:17:14.299333 systemd[1]: Started sshd@19-165.232.133.181:22-147.75.109.163:58128.service - OpenSSH per-connection server daemon (147.75.109.163:58128). Jun 26 07:17:14.302600 systemd[1]: sshd@18-165.232.133.181:22-147.75.109.163:58120.service: Deactivated successfully. Jun 26 07:17:14.311898 systemd[1]: session-19.scope: Deactivated successfully. Jun 26 07:17:14.326889 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jun 26 07:17:14.337460 systemd-logind[1448]: Removed session 19. Jun 26 07:17:14.436879 sshd[5210]: Accepted publickey for core from 147.75.109.163 port 58128 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:17:14.440145 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:17:14.456004 systemd-logind[1448]: New session 20 of user core. Jun 26 07:17:14.465220 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 26 07:17:14.754696 systemd-networkd[1360]: cali486fb63ee2b: Link UP Jun 26 07:17:14.762019 systemd-networkd[1360]: cali486fb63ee2b: Gained carrier Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.438 [INFO][5200] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0 calico-apiserver-7b7756c7fd- calico-apiserver 5c0588e0-5afa-4d4e-a3ef-71c15aee95fd 1217 0 2024-06-26 07:17:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b7756c7fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-0-ebda1d1a0c calico-apiserver-7b7756c7fd-2ntwt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali486fb63ee2b [] []}} ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Namespace="calico-apiserver" Pod="calico-apiserver-7b7756c7fd-2ntwt" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.438 [INFO][5200] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Namespace="calico-apiserver" Pod="calico-apiserver-7b7756c7fd-2ntwt" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.538 [INFO][5217] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" HandleID="k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0" Jun 26 07:17:14.810690 containerd[1464]: 
2024-06-26 07:17:14.576 [INFO][5217] ipam_plugin.go 264: Auto assigning IP ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" HandleID="k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-0-ebda1d1a0c", "pod":"calico-apiserver-7b7756c7fd-2ntwt", "timestamp":"2024-06-26 07:17:14.538846853 +0000 UTC"}, Hostname:"ci-4012.0.0-0-ebda1d1a0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.576 [INFO][5217] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.577 [INFO][5217] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.577 [INFO][5217] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-0-ebda1d1a0c' Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.581 [INFO][5217] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.598 [INFO][5217] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.628 [INFO][5217] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.633 [INFO][5217] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.638 [INFO][5217] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.639 [INFO][5217] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.643 [INFO][5217] ipam.go 1685: Creating new handle: k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0 Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.669 [INFO][5217] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" host="ci-4012.0.0-0-ebda1d1a0c" Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.720 [INFO][5217] ipam.go 1216: Successfully claimed IPs: [192.168.36.197/26] 
block=192.168.36.192/26 handle="k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" host="ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.720 [INFO][5217] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.197/26] handle="k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" host="ci-4012.0.0-0-ebda1d1a0c"
Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.720 [INFO][5217] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 26 07:17:14.810690 containerd[1464]: 2024-06-26 07:17:14.720 [INFO][5217] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.197/26] IPv6=[] ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" HandleID="k8s-pod-network.a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Workload="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0"
Jun 26 07:17:14.817088 containerd[1464]: 2024-06-26 07:17:14.734 [INFO][5200] k8s.go 386: Populated endpoint ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Namespace="calico-apiserver" Pod="calico-apiserver-7b7756c7fd-2ntwt" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0", GenerateName:"calico-apiserver-7b7756c7fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c0588e0-5afa-4d4e-a3ef-71c15aee95fd", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7756c7fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"", Pod:"calico-apiserver-7b7756c7fd-2ntwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali486fb63ee2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 26 07:17:14.817088 containerd[1464]: 2024-06-26 07:17:14.738 [INFO][5200] k8s.go 387: Calico CNI using IPs: [192.168.36.197/32] ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Namespace="calico-apiserver" Pod="calico-apiserver-7b7756c7fd-2ntwt" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0"
Jun 26 07:17:14.817088 containerd[1464]: 2024-06-26 07:17:14.740 [INFO][5200] dataplane_linux.go 68: Setting the host side veth name to cali486fb63ee2b ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Namespace="calico-apiserver" Pod="calico-apiserver-7b7756c7fd-2ntwt" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0"
Jun 26 07:17:14.817088 containerd[1464]: 2024-06-26 07:17:14.763 [INFO][5200] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Namespace="calico-apiserver" Pod="calico-apiserver-7b7756c7fd-2ntwt" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0"
Jun 26 07:17:14.817088 containerd[1464]: 2024-06-26 07:17:14.765 [INFO][5200] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Namespace="calico-apiserver" Pod="calico-apiserver-7b7756c7fd-2ntwt" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0", GenerateName:"calico-apiserver-7b7756c7fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c0588e0-5afa-4d4e-a3ef-71c15aee95fd", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 7, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7756c7fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-0-ebda1d1a0c", ContainerID:"a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0", Pod:"calico-apiserver-7b7756c7fd-2ntwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali486fb63ee2b", MAC:"7a:c6:ca:6c:5c:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 26 07:17:14.817088 containerd[1464]: 2024-06-26 07:17:14.799 [INFO][5200] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0" Namespace="calico-apiserver" Pod="calico-apiserver-7b7756c7fd-2ntwt" WorkloadEndpoint="ci--4012.0.0--0--ebda1d1a0c-k8s-calico--apiserver--7b7756c7fd--2ntwt-eth0"
Jun 26 07:17:14.976570 containerd[1464]: time="2024-06-26T07:17:14.972604953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:17:14.976570 containerd[1464]: time="2024-06-26T07:17:14.972697809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:17:14.976570 containerd[1464]: time="2024-06-26T07:17:14.972739834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:17:14.976570 containerd[1464]: time="2024-06-26T07:17:14.972788648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:17:15.138466 systemd[1]: Started cri-containerd-a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0.scope - libcontainer container a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0.
Jun 26 07:17:15.181513 sshd[5210]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:15.198201 systemd[1]: Started sshd@20-165.232.133.181:22-147.75.109.163:58136.service - OpenSSH per-connection server daemon (147.75.109.163:58136).
Jun 26 07:17:15.219187 systemd[1]: sshd@19-165.232.133.181:22-147.75.109.163:58128.service: Deactivated successfully.
Jun 26 07:17:15.219673 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Jun 26 07:17:15.229855 systemd[1]: session-20.scope: Deactivated successfully.
Jun 26 07:17:15.245765 systemd-logind[1448]: Removed session 20.
Jun 26 07:17:15.335470 sshd[5275]: Accepted publickey for core from 147.75.109.163 port 58136 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:15.342248 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:15.357285 systemd-logind[1448]: New session 21 of user core.
Jun 26 07:17:15.360097 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 26 07:17:15.471890 containerd[1464]: time="2024-06-26T07:17:15.470526803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7756c7fd-2ntwt,Uid:5c0588e0-5afa-4d4e-a3ef-71c15aee95fd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0\""
Jun 26 07:17:15.477096 containerd[1464]: time="2024-06-26T07:17:15.476532728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jun 26 07:17:16.689185 systemd-networkd[1360]: cali486fb63ee2b: Gained IPv6LL
Jun 26 07:17:18.762088 sshd[5275]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:18.786207 systemd[1]: sshd@20-165.232.133.181:22-147.75.109.163:58136.service: Deactivated successfully.
Jun 26 07:17:18.788824 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Jun 26 07:17:18.793566 systemd[1]: session-21.scope: Deactivated successfully.
Jun 26 07:17:18.809053 systemd[1]: Started sshd@21-165.232.133.181:22-147.75.109.163:47330.service - OpenSSH per-connection server daemon (147.75.109.163:47330).
Jun 26 07:17:18.812046 systemd-logind[1448]: Removed session 21.
Jun 26 07:17:19.018942 sshd[5308]: Accepted publickey for core from 147.75.109.163 port 47330 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:19.024976 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:19.048018 systemd-logind[1448]: New session 22 of user core.
Jun 26 07:17:19.055091 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 26 07:17:20.162826 containerd[1464]: time="2024-06-26T07:17:20.159319048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jun 26 07:17:20.164803 containerd[1464]: time="2024-06-26T07:17:20.163919919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:20.172517 containerd[1464]: time="2024-06-26T07:17:20.172456924Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:20.196784 containerd[1464]: time="2024-06-26T07:17:20.193850887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:17:20.196784 containerd[1464]: time="2024-06-26T07:17:20.195470504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.718845479s"
Jun 26 07:17:20.196784 containerd[1464]: time="2024-06-26T07:17:20.195527775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jun 26 07:17:20.222304 containerd[1464]: time="2024-06-26T07:17:20.221542237Z" level=info msg="CreateContainer within sandbox \"a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 26 07:17:20.266915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232973203.mount: Deactivated successfully.
Jun 26 07:17:20.281790 containerd[1464]: time="2024-06-26T07:17:20.278869786Z" level=info msg="CreateContainer within sandbox \"a46c0428b5cbe1cd18b4493878b52778bdd7e8ee5bbffd48905be826ec2118f0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ae6b7937d350922633aac69192c9ee082b498b0ba18355728980e31d81f9d4de\""
Jun 26 07:17:20.282161 containerd[1464]: time="2024-06-26T07:17:20.281994543Z" level=info msg="StartContainer for \"ae6b7937d350922633aac69192c9ee082b498b0ba18355728980e31d81f9d4de\""
Jun 26 07:17:20.460865 systemd[1]: Started cri-containerd-ae6b7937d350922633aac69192c9ee082b498b0ba18355728980e31d81f9d4de.scope - libcontainer container ae6b7937d350922633aac69192c9ee082b498b0ba18355728980e31d81f9d4de.
Jun 26 07:17:20.553442 sshd[5308]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:20.563456 systemd[1]: sshd@21-165.232.133.181:22-147.75.109.163:47330.service: Deactivated successfully.
Jun 26 07:17:20.567960 systemd[1]: session-22.scope: Deactivated successfully.
Jun 26 07:17:20.573130 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Jun 26 07:17:20.584202 systemd[1]: Started sshd@22-165.232.133.181:22-147.75.109.163:47332.service - OpenSSH per-connection server daemon (147.75.109.163:47332).
Jun 26 07:17:20.588390 systemd-logind[1448]: Removed session 22.
Jun 26 07:17:20.653236 containerd[1464]: time="2024-06-26T07:17:20.652737394Z" level=info msg="StartContainer for \"ae6b7937d350922633aac69192c9ee082b498b0ba18355728980e31d81f9d4de\" returns successfully"
Jun 26 07:17:20.678620 sshd[5378]: Accepted publickey for core from 147.75.109.163 port 47332 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:20.683355 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:20.694636 systemd-logind[1448]: New session 23 of user core.
Jun 26 07:17:20.701395 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 26 07:17:20.876079 sshd[5378]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:20.881862 systemd[1]: sshd@22-165.232.133.181:22-147.75.109.163:47332.service: Deactivated successfully.
Jun 26 07:17:20.882070 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Jun 26 07:17:20.885432 systemd[1]: session-23.scope: Deactivated successfully.
Jun 26 07:17:20.888421 systemd-logind[1448]: Removed session 23.
Jun 26 07:17:21.677795 kubelet[2555]: I0626 07:17:21.677110 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b7756c7fd-2ntwt" podStartSLOduration=3.943489315 podStartE2EDuration="8.665222786s" podCreationTimestamp="2024-06-26 07:17:13 +0000 UTC" firstStartedPulling="2024-06-26 07:17:15.475802297 +0000 UTC m=+97.077840137" lastFinishedPulling="2024-06-26 07:17:20.197535751 +0000 UTC m=+101.799573608" observedRunningTime="2024-06-26 07:17:21.44242346 +0000 UTC m=+103.044461319" watchObservedRunningTime="2024-06-26 07:17:21.665222786 +0000 UTC m=+103.267260645"
Jun 26 07:17:25.895114 systemd[1]: Started sshd@23-165.232.133.181:22-147.75.109.163:47334.service - OpenSSH per-connection server daemon (147.75.109.163:47334).
Jun 26 07:17:26.004869 sshd[5416]: Accepted publickey for core from 147.75.109.163 port 47334 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:26.006540 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:26.013951 systemd-logind[1448]: New session 24 of user core.
Jun 26 07:17:26.020136 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 26 07:17:26.198778 sshd[5416]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:26.205545 systemd[1]: sshd@23-165.232.133.181:22-147.75.109.163:47334.service: Deactivated successfully.
Jun 26 07:17:26.208436 systemd[1]: session-24.scope: Deactivated successfully.
Jun 26 07:17:26.209820 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Jun 26 07:17:26.210984 systemd-logind[1448]: Removed session 24.
Jun 26 07:17:31.222286 systemd[1]: Started sshd@24-165.232.133.181:22-147.75.109.163:45352.service - OpenSSH per-connection server daemon (147.75.109.163:45352).
Jun 26 07:17:31.281787 sshd[5432]: Accepted publickey for core from 147.75.109.163 port 45352 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:31.284488 sshd[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:31.296888 systemd-logind[1448]: New session 25 of user core.
Jun 26 07:17:31.300084 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 26 07:17:31.460006 sshd[5432]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:31.466085 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Jun 26 07:17:31.466515 systemd[1]: sshd@24-165.232.133.181:22-147.75.109.163:45352.service: Deactivated successfully.
Jun 26 07:17:31.469351 systemd[1]: session-25.scope: Deactivated successfully.
Jun 26 07:17:31.470668 systemd-logind[1448]: Removed session 25.
Jun 26 07:17:36.487117 systemd[1]: Started sshd@25-165.232.133.181:22-147.75.109.163:45218.service - OpenSSH per-connection server daemon (147.75.109.163:45218).
Jun 26 07:17:36.582827 sshd[5453]: Accepted publickey for core from 147.75.109.163 port 45218 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:36.586524 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:36.595766 systemd-logind[1448]: New session 26 of user core.
Jun 26 07:17:36.609173 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 26 07:17:36.876151 sshd[5453]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:36.883201 systemd[1]: sshd@25-165.232.133.181:22-147.75.109.163:45218.service: Deactivated successfully.
Jun 26 07:17:36.889250 systemd[1]: session-26.scope: Deactivated successfully.
Jun 26 07:17:36.894434 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit.
Jun 26 07:17:36.897199 systemd-logind[1448]: Removed session 26.
Jun 26 07:17:41.899203 systemd[1]: Started sshd@26-165.232.133.181:22-147.75.109.163:45234.service - OpenSSH per-connection server daemon (147.75.109.163:45234).
Jun 26 07:17:42.006794 sshd[5516]: Accepted publickey for core from 147.75.109.163 port 45234 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:42.011116 sshd[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:42.020937 systemd-logind[1448]: New session 27 of user core.
Jun 26 07:17:42.027107 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 26 07:17:42.235853 sshd[5516]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:42.243069 systemd[1]: sshd@26-165.232.133.181:22-147.75.109.163:45234.service: Deactivated successfully.
Jun 26 07:17:42.245902 systemd[1]: session-27.scope: Deactivated successfully.
Jun 26 07:17:42.247792 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit.
Jun 26 07:17:42.249583 systemd-logind[1448]: Removed session 27.
Jun 26 07:17:46.733188 kubelet[2555]: E0626 07:17:46.732706 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:17:47.257472 systemd[1]: Started sshd@27-165.232.133.181:22-147.75.109.163:47216.service - OpenSSH per-connection server daemon (147.75.109.163:47216).
Jun 26 07:17:47.315820 sshd[5534]: Accepted publickey for core from 147.75.109.163 port 47216 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:47.318222 sshd[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:47.332894 systemd-logind[1448]: New session 28 of user core.
Jun 26 07:17:47.338065 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 26 07:17:47.532496 sshd[5534]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:47.539416 systemd[1]: sshd@27-165.232.133.181:22-147.75.109.163:47216.service: Deactivated successfully.
Jun 26 07:17:47.542459 systemd[1]: session-28.scope: Deactivated successfully.
Jun 26 07:17:47.545012 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit.
Jun 26 07:17:47.546588 systemd-logind[1448]: Removed session 28.
Jun 26 07:17:49.645348 systemd[1]: run-containerd-runc-k8s.io-a4e6466f19beb2516a143803a29f7e5526b7be106c3014a6148c005b7208a753-runc.0OQwik.mount: Deactivated successfully.
Jun 26 07:17:52.554185 systemd[1]: Started sshd@28-165.232.133.181:22-147.75.109.163:47228.service - OpenSSH per-connection server daemon (147.75.109.163:47228).
Jun 26 07:17:52.621641 sshd[5568]: Accepted publickey for core from 147.75.109.163 port 47228 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:17:52.623535 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:17:52.631802 systemd-logind[1448]: New session 29 of user core.
Jun 26 07:17:52.636716 systemd[1]: Started session-29.scope - Session 29 of User core.
Jun 26 07:17:52.735914 kubelet[2555]: E0626 07:17:52.735367 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:17:52.785007 sshd[5568]: pam_unix(sshd:session): session closed for user core
Jun 26 07:17:52.790688 systemd[1]: sshd@28-165.232.133.181:22-147.75.109.163:47228.service: Deactivated successfully.
Jun 26 07:17:52.793926 systemd[1]: session-29.scope: Deactivated successfully.
Jun 26 07:17:52.795225 systemd-logind[1448]: Session 29 logged out. Waiting for processes to exit.
Jun 26 07:17:52.796804 systemd-logind[1448]: Removed session 29.
Jun 26 07:17:54.733582 kubelet[2555]: E0626 07:17:54.732972 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"