Oct 9 07:49:49.096529 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 9 07:49:49.096570 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 9 07:49:49.096585 kernel: BIOS-provided physical RAM map:
Oct 9 07:49:49.096594 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 07:49:49.096605 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 07:49:49.096616 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 07:49:49.096628 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Oct 9 07:49:49.096639 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Oct 9 07:49:49.096649 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 07:49:49.096667 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 07:49:49.096679 kernel: NX (Execute Disable) protection: active
Oct 9 07:49:49.096690 kernel: APIC: Static calls initialized
Oct 9 07:49:49.096703 kernel: SMBIOS 2.8 present.
Oct 9 07:49:49.096715 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 9 07:49:49.097327 kernel: Hypervisor detected: KVM
Oct 9 07:49:49.097390 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:49:49.097404 kernel: kvm-clock: using sched offset of 4154874390 cycles
Oct 9 07:49:49.097433 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:49:49.097446 kernel: tsc: Detected 2494.138 MHz processor
Oct 9 07:49:49.097460 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:49:49.097475 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:49:49.097489 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Oct 9 07:49:49.097501 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 07:49:49.097514 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:49:49.097533 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:49:49.097546 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Oct 9 07:49:49.097561 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:49:49.097574 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:49:49.097586 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:49:49.097598 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 9 07:49:49.097612 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:49:49.097626 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:49:49.097639 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:49:49.097658 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:49:49.097671 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 9 07:49:49.097683 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 9 07:49:49.097697 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 9 07:49:49.097711 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 9 07:49:49.097724 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 9 07:49:49.097758 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 9 07:49:49.097786 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 9 07:49:49.097799 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 9 07:49:49.097812 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 9 07:49:49.097828 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 9 07:49:49.097843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 9 07:49:49.097853 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Oct 9 07:49:49.097863 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Oct 9 07:49:49.097877 kernel: Zone ranges:
Oct 9 07:49:49.097886 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:49:49.097896 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Oct 9 07:49:49.097905 kernel: Normal empty
Oct 9 07:49:49.097914 kernel: Movable zone start for each node
Oct 9 07:49:49.097923 kernel: Early memory node ranges
Oct 9 07:49:49.097933 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 07:49:49.097942 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Oct 9 07:49:49.097951 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Oct 9 07:49:49.097964 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:49:49.097973 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 07:49:49.097982 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Oct 9 07:49:49.097992 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:49:49.098001 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:49:49.098010 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:49:49.098019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:49:49.098028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:49:49.098037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:49:49.098050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:49:49.098060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:49:49.098069 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:49:49.098078 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 07:49:49.098087 kernel: TSC deadline timer available
Oct 9 07:49:49.098096 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 07:49:49.098105 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:49:49.098114 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 9 07:49:49.098124 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:49:49.098144 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:49:49.098154 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 07:49:49.098163 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 07:49:49.098172 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 07:49:49.098181 kernel: pcpu-alloc: [0] 0 1
Oct 9 07:49:49.098190 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 07:49:49.098202 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 9 07:49:49.098212 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:49:49.098225 kernel: random: crng init done
Oct 9 07:49:49.098234 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:49:49.098243 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 07:49:49.098253 kernel: Fallback order for Node 0: 0
Oct 9 07:49:49.098262 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Oct 9 07:49:49.098271 kernel: Policy zone: DMA32
Oct 9 07:49:49.098280 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:49:49.098290 kernel: Memory: 1971188K/2096600K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125152K reserved, 0K cma-reserved)
Oct 9 07:49:49.098299 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 07:49:49.098312 kernel: Kernel/User page tables isolation: enabled
Oct 9 07:49:49.098321 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 9 07:49:49.098331 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:49:49.098340 kernel: Dynamic Preempt: voluntary
Oct 9 07:49:49.098349 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:49:49.098360 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:49:49.098369 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 07:49:49.098378 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:49:49.098388 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:49:49.098401 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:49:49.098410 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:49:49.098419 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 07:49:49.098428 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 07:49:49.098438 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:49:49.098447 kernel: Console: colour VGA+ 80x25
Oct 9 07:49:49.098458 kernel: printk: console [tty0] enabled
Oct 9 07:49:49.098472 kernel: printk: console [ttyS0] enabled
Oct 9 07:49:49.098484 kernel: ACPI: Core revision 20230628
Oct 9 07:49:49.098496 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 07:49:49.098515 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:49:49.098527 kernel: x2apic enabled
Oct 9 07:49:49.098539 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:49:49.098553 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:49:49.098567 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 07:49:49.098580 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Oct 9 07:49:49.098591 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 9 07:49:49.098601 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 9 07:49:49.098630 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:49:49.098639 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:49:49.098664 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:49:49.098682 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:49:49.098692 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 9 07:49:49.098702 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 07:49:49.098712 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 07:49:49.098722 kernel: MDS: Mitigation: Clear CPU buffers
Oct 9 07:49:49.100787 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 9 07:49:49.100862 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 07:49:49.100874 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 07:49:49.100884 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 07:49:49.100893 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 07:49:49.100904 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 9 07:49:49.100914 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:49:49.100924 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:49:49.100934 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 07:49:49.100952 kernel: landlock: Up and running.
Oct 9 07:49:49.100966 kernel: SELinux: Initializing.
Oct 9 07:49:49.100979 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:49:49.100993 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:49:49.101006 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 9 07:49:49.101019 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:49:49.101032 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:49:49.101045 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:49:49.101058 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 9 07:49:49.101078 kernel: signal: max sigframe size: 1776
Oct 9 07:49:49.101092 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:49:49.101106 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:49:49.101119 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 9 07:49:49.101134 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:49:49.101147 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:49:49.101163 kernel: .... node #0, CPUs: #1
Oct 9 07:49:49.101181 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 07:49:49.101201 kernel: smpboot: Max logical packages: 1
Oct 9 07:49:49.101222 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Oct 9 07:49:49.101237 kernel: devtmpfs: initialized
Oct 9 07:49:49.101253 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:49:49.101269 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:49:49.101284 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 07:49:49.101299 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:49:49.101313 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:49:49.101332 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:49:49.101345 kernel: audit: type=2000 audit(1728460187.915:1): state=initialized audit_enabled=0 res=1
Oct 9 07:49:49.101367 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:49:49.101388 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:49:49.101408 kernel: cpuidle: using governor menu
Oct 9 07:49:49.101430 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:49:49.101447 kernel: dca service started, version 1.12.1
Oct 9 07:49:49.101461 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:49:49.101479 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:49:49.101498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:49:49.101517 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:49:49.101543 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:49:49.101561 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:49:49.101581 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:49:49.101600 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:49:49.101620 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:49:49.101651 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:49:49.101670 kernel: ACPI: Interpreter enabled
Oct 9 07:49:49.101689 kernel: ACPI: PM: (supports S0 S5)
Oct 9 07:49:49.101708 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:49:49.101729 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:49:49.101956 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:49:49.101984 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 9 07:49:49.102001 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:49:49.102319 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:49:49.102467 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 9 07:49:49.102622 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 9 07:49:49.102646 kernel: acpiphp: Slot [3] registered
Oct 9 07:49:49.102657 kernel: acpiphp: Slot [4] registered
Oct 9 07:49:49.102667 kernel: acpiphp: Slot [5] registered
Oct 9 07:49:49.102682 kernel: acpiphp: Slot [6] registered
Oct 9 07:49:49.102692 kernel: acpiphp: Slot [7] registered
Oct 9 07:49:49.102702 kernel: acpiphp: Slot [8] registered
Oct 9 07:49:49.102712 kernel: acpiphp: Slot [9] registered
Oct 9 07:49:49.102722 kernel: acpiphp: Slot [10] registered
Oct 9 07:49:49.102751 kernel: acpiphp: Slot [11] registered
Oct 9 07:49:49.102772 kernel: acpiphp: Slot [12] registered
Oct 9 07:49:49.102787 kernel: acpiphp: Slot [13] registered
Oct 9 07:49:49.102801 kernel: acpiphp: Slot [14] registered
Oct 9 07:49:49.102811 kernel: acpiphp: Slot [15] registered
Oct 9 07:49:49.102821 kernel: acpiphp: Slot [16] registered
Oct 9 07:49:49.102831 kernel: acpiphp: Slot [17] registered
Oct 9 07:49:49.102841 kernel: acpiphp: Slot [18] registered
Oct 9 07:49:49.102851 kernel: acpiphp: Slot [19] registered
Oct 9 07:49:49.102862 kernel: acpiphp: Slot [20] registered
Oct 9 07:49:49.102872 kernel: acpiphp: Slot [21] registered
Oct 9 07:49:49.102886 kernel: acpiphp: Slot [22] registered
Oct 9 07:49:49.102896 kernel: acpiphp: Slot [23] registered
Oct 9 07:49:49.102906 kernel: acpiphp: Slot [24] registered
Oct 9 07:49:49.102916 kernel: acpiphp: Slot [25] registered
Oct 9 07:49:49.102925 kernel: acpiphp: Slot [26] registered
Oct 9 07:49:49.102939 kernel: acpiphp: Slot [27] registered
Oct 9 07:49:49.102955 kernel: acpiphp: Slot [28] registered
Oct 9 07:49:49.102968 kernel: acpiphp: Slot [29] registered
Oct 9 07:49:49.102982 kernel: acpiphp: Slot [30] registered
Oct 9 07:49:49.103000 kernel: acpiphp: Slot [31] registered
Oct 9 07:49:49.103014 kernel: PCI host bridge to bus 0000:00
Oct 9 07:49:49.103259 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:49:49.103421 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:49:49.103539 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:49:49.103672 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 9 07:49:49.105162 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 9 07:49:49.105305 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:49:49.105491 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 9 07:49:49.105630 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 9 07:49:49.105941 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 9 07:49:49.106122 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Oct 9 07:49:49.106289 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 9 07:49:49.106454 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 9 07:49:49.106576 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 9 07:49:49.106721 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 9 07:49:49.106896 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 9 07:49:49.107065 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Oct 9 07:49:49.107252 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 9 07:49:49.107412 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 9 07:49:49.107582 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 9 07:49:49.107716 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 9 07:49:49.107952 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 9 07:49:49.108061 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 9 07:49:49.108165 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Oct 9 07:49:49.108270 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct 9 07:49:49.108375 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:49:49.108510 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:49:49.108623 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Oct 9 07:49:49.108753 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Oct 9 07:49:49.108894 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 9 07:49:49.109084 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:49:49.109197 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Oct 9 07:49:49.109303 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Oct 9 07:49:49.109417 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 9 07:49:49.109536 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Oct 9 07:49:49.109641 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Oct 9 07:49:49.109914 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Oct 9 07:49:49.110025 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 9 07:49:49.110165 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:49:49.110312 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 07:49:49.110488 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Oct 9 07:49:49.110602 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 9 07:49:49.110725 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:49:49.113064 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Oct 9 07:49:49.113283 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Oct 9 07:49:49.113450 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 9 07:49:49.113649 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Oct 9 07:49:49.113861 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Oct 9 07:49:49.114024 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 9 07:49:49.114045 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:49:49.114059 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:49:49.114075 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:49:49.114089 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:49:49.114115 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 9 07:49:49.114130 kernel: iommu: Default domain type: Translated
Oct 9 07:49:49.114144 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:49:49.114160 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:49:49.114174 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:49:49.114188 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 07:49:49.114203 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Oct 9 07:49:49.114407 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 9 07:49:49.114545 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 9 07:49:49.114716 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:49:49.116986 kernel: vgaarb: loaded
Oct 9 07:49:49.117018 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 07:49:49.117030 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 07:49:49.117041 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:49:49.117051 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:49:49.117063 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:49:49.117074 kernel: pnp: PnP ACPI init
Oct 9 07:49:49.117084 kernel: pnp: PnP ACPI: found 4 devices
Oct 9 07:49:49.117104 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:49:49.117114 kernel: NET: Registered PF_INET protocol family
Oct 9 07:49:49.117125 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:49:49.117136 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 07:49:49.117147 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:49:49.117157 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 07:49:49.117167 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 07:49:49.117179 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 07:49:49.117193 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:49:49.117215 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:49:49.117228 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:49:49.117238 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:49:49.117446 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:49:49.117602 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:49:49.117796 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:49:49.117970 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 9 07:49:49.118140 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 9 07:49:49.118344 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 9 07:49:49.118524 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 9 07:49:49.118552 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 9 07:49:49.118723 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 37567 usecs
Oct 9 07:49:49.118976 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:49:49.118995 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 9 07:49:49.119012 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 07:49:49.119029 kernel: Initialise system trusted keyrings
Oct 9 07:49:49.119054 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 07:49:49.119064 kernel: Key type asymmetric registered
Oct 9 07:49:49.119075 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:49:49.119085 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:49:49.119095 kernel: io scheduler mq-deadline registered
Oct 9 07:49:49.119105 kernel: io scheduler kyber registered
Oct 9 07:49:49.119115 kernel: io scheduler bfq registered
Oct 9 07:49:49.119126 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:49:49.119137 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 9 07:49:49.119148 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 9 07:49:49.119184 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 9 07:49:49.119199 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:49:49.119214 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:49:49.119229 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:49:49.119244 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:49:49.119259 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:49:49.119527 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 9 07:49:49.119556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:49:49.119699 kernel: rtc_cmos 00:03: registered as rtc0
Oct 9 07:49:49.119871 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T07:49:48 UTC (1728460188)
Oct 9 07:49:49.120005 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 9 07:49:49.120020 kernel: intel_pstate: CPU model not supported
Oct 9 07:49:49.120031 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:49:49.120044 kernel: Segment Routing with IPv6
Oct 9 07:49:49.120059 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:49:49.120072 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:49:49.120097 kernel: Key type dns_resolver registered
Oct 9 07:49:49.120113 kernel: IPI shorthand broadcast: enabled
Oct 9 07:49:49.120127 kernel: sched_clock: Marking stable (1190006454, 120013569)->(1356451803, -46431780)
Oct 9 07:49:49.120144 kernel: registered taskstats version 1
Oct 9 07:49:49.120158 kernel: Loading compiled-in X.509 certificates
Oct 9 07:49:49.120174 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 9 07:49:49.120188 kernel: Key type .fscrypt registered
Oct 9 07:49:49.120201 kernel: Key type fscrypt-provisioning registered
Oct 9 07:49:49.120216 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:49:49.120238 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:49:49.120257 kernel: ima: No architecture policies found
Oct 9 07:49:49.120273 kernel: clk: Disabling unused clocks
Oct 9 07:49:49.120285 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 9 07:49:49.120296 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:49:49.120357 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 9 07:49:49.120379 kernel: Run /init as init process
Oct 9 07:49:49.120397 kernel: with arguments:
Oct 9 07:49:49.120414 kernel: /init
Oct 9 07:49:49.120432 kernel: with environment:
Oct 9 07:49:49.120447 kernel: HOME=/
Oct 9 07:49:49.120458 kernel: TERM=linux
Oct 9 07:49:49.120469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:49:49.120486 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:49:49.120506 systemd[1]: Detected virtualization kvm.
Oct 9 07:49:49.120522 systemd[1]: Detected architecture x86-64.
Oct 9 07:49:49.120538 systemd[1]: Running in initrd.
Oct 9 07:49:49.120559 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:49:49.120576 systemd[1]: Hostname set to .
Oct 9 07:49:49.120588 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:49:49.120599 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:49:49.120610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:49:49.120622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:49:49.120636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:49:49.120652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:49:49.120675 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:49:49.120690 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:49:49.120709 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:49:49.120724 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:49:49.122873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:49:49.122949 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:49:49.122985 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:49:49.123004 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:49:49.123023 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:49:49.123047 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:49:49.123066 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:49:49.123081 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:49:49.123103 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:49:49.123119 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:49:49.123135 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:49:49.123153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:49:49.123261 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:49:49.123278 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:49:49.123295 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 07:49:49.123313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:49:49.123338 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 07:49:49.123355 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 07:49:49.123371 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:49:49.123387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:49:49.123403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:49:49.123418 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 07:49:49.123509 systemd-journald[181]: Collecting audit messages is disabled. Oct 9 07:49:49.123558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:49:49.123575 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 07:49:49.123593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:49:49.123617 systemd-journald[181]: Journal started Oct 9 07:49:49.123648 systemd-journald[181]: Runtime Journal (/run/log/journal/5ae57698bb39487dbe756af032377d77) is 4.9M, max 39.3M, 34.4M free. Oct 9 07:49:49.126778 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:49:49.135392 systemd-modules-load[182]: Inserted module 'overlay' Oct 9 07:49:49.192607 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 9 07:49:49.192677 kernel: Bridge firewalling registered Oct 9 07:49:49.137236 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 07:49:49.174697 systemd-modules-load[182]: Inserted module 'br_netfilter' Oct 9 07:49:49.200189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:49:49.206885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:49:49.215376 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:49:49.219941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 07:49:49.230190 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:49:49.235111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:49:49.245163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:49:49.266219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:49:49.276601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:49:49.277829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:49:49.287953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:49:49.298138 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 07:49:49.335348 dracut-cmdline[219]: dracut-dracut-053 Oct 9 07:49:49.339076 systemd-resolved[215]: Positive Trust Anchors: Oct 9 07:49:49.340116 systemd-resolved[215]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:49:49.344877 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 9 07:49:49.340178 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 07:49:49.349598 systemd-resolved[215]: Defaulting to hostname 'linux'. Oct 9 07:49:49.351670 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:49:49.354960 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:49:49.506868 kernel: SCSI subsystem initialized Oct 9 07:49:49.521807 kernel: Loading iSCSI transport class v2.0-870. Oct 9 07:49:49.540143 kernel: iscsi: registered transport (tcp) Oct 9 07:49:49.573019 kernel: iscsi: registered transport (qla4xxx) Oct 9 07:49:49.573182 kernel: QLogic iSCSI HBA Driver Oct 9 07:49:49.665494 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 07:49:49.676430 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 07:49:49.738264 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 07:49:49.738407 kernel: device-mapper: uevent: version 1.0.3 Oct 9 07:49:49.740354 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 07:49:49.809843 kernel: raid6: avx2x4 gen() 16504 MB/s Oct 9 07:49:49.826885 kernel: raid6: avx2x2 gen() 13451 MB/s Oct 9 07:49:49.844108 kernel: raid6: avx2x1 gen() 11579 MB/s Oct 9 07:49:49.844237 kernel: raid6: using algorithm avx2x4 gen() 16504 MB/s Oct 9 07:49:49.862926 kernel: raid6: .... xor() 5934 MB/s, rmw enabled Oct 9 07:49:49.863064 kernel: raid6: using avx2x2 recovery algorithm Oct 9 07:49:49.912010 kernel: xor: automatically using best checksumming function avx Oct 9 07:49:50.156523 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 07:49:50.179624 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:49:50.187379 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:49:50.222234 systemd-udevd[402]: Using default interface naming scheme 'v255'. Oct 9 07:49:50.229834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:49:50.242671 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 07:49:50.290349 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Oct 9 07:49:50.361269 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:49:50.373137 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:49:50.495960 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:49:50.506027 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 07:49:50.541027 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 07:49:50.549529 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 9 07:49:50.550992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:49:50.551766 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:49:50.560096 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 07:49:50.600083 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:49:50.671155 kernel: scsi host0: Virtio SCSI HBA Oct 9 07:49:50.678827 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Oct 9 07:49:50.713802 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 9 07:49:50.715779 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 07:49:50.717908 kernel: ACPI: bus type USB registered Oct 9 07:49:50.717981 kernel: usbcore: registered new interface driver usbfs Oct 9 07:49:50.719959 kernel: usbcore: registered new interface driver hub Oct 9 07:49:50.725765 kernel: usbcore: registered new device driver usb Oct 9 07:49:50.741004 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 07:49:50.741089 kernel: GPT:9289727 != 125829119 Oct 9 07:49:50.741109 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 07:49:50.741126 kernel: GPT:9289727 != 125829119 Oct 9 07:49:50.741143 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 07:49:50.741160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:49:50.756771 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Oct 9 07:49:50.764821 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Oct 9 07:49:50.765764 kernel: libata version 3.00 loaded. Oct 9 07:49:50.777375 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 07:49:50.777466 kernel: AES CTR mode by8 optimization enabled Oct 9 07:49:50.782767 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 9 07:49:50.786238 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Oct 9 07:49:50.786525 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:49:50.789136 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:49:50.793938 kernel: scsi host1: ata_piix Oct 9 07:49:50.789575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:49:50.789818 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:49:50.791907 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:49:50.809969 kernel: scsi host2: ata_piix Oct 9 07:49:50.810262 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Oct 9 07:49:50.810288 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Oct 9 07:49:50.806513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:49:50.838527 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Oct 9 07:49:50.860676 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 07:49:50.877780 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (465) Oct 9 07:49:50.892534 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 07:49:50.943710 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Oct 9 07:49:50.944002 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Oct 9 07:49:50.944134 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Oct 9 07:49:50.944265 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Oct 9 07:49:50.944455 kernel: hub 1-0:1.0: USB hub found Oct 9 07:49:50.944653 kernel: hub 1-0:1.0: 2 ports detected Oct 9 07:49:50.944602 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 9 07:49:50.959914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:49:50.968409 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 07:49:50.969265 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 07:49:50.976042 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 07:49:50.983069 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:49:50.994797 disk-uuid[542]: Primary Header is updated. Oct 9 07:49:50.994797 disk-uuid[542]: Secondary Entries is updated. Oct 9 07:49:50.994797 disk-uuid[542]: Secondary Header is updated. Oct 9 07:49:51.007788 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:49:51.017797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:49:51.018411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:49:52.036816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:49:52.038078 disk-uuid[543]: The operation has completed successfully. Oct 9 07:49:52.104635 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 07:49:52.104879 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 07:49:52.135074 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 07:49:52.143523 sh[564]: Success Oct 9 07:49:52.165814 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 9 07:49:52.265816 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 07:49:52.281358 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 07:49:52.285064 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 9 07:49:52.326672 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 9 07:49:52.326795 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:49:52.326821 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 07:49:52.326843 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 07:49:52.327885 kernel: BTRFS info (device dm-0): using free space tree Oct 9 07:49:52.343305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 07:49:52.345669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 07:49:52.352070 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 07:49:52.354986 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 07:49:52.381627 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:49:52.381722 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:49:52.381760 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:49:52.402437 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:49:52.422466 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 07:49:52.424418 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:49:52.436531 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 07:49:52.446116 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 07:49:52.593426 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:49:52.603438 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 9 07:49:52.637546 ignition[664]: Ignition 2.19.0 Oct 9 07:49:52.637566 ignition[664]: Stage: fetch-offline Oct 9 07:49:52.639997 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:49:52.637620 ignition[664]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:49:52.637633 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:49:52.637814 ignition[664]: parsed url from cmdline: "" Oct 9 07:49:52.637820 ignition[664]: no config URL provided Oct 9 07:49:52.637828 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:49:52.637843 ignition[664]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:49:52.637852 ignition[664]: failed to fetch config: resource requires networking Oct 9 07:49:52.638199 ignition[664]: Ignition finished successfully Oct 9 07:49:52.651065 systemd-networkd[751]: lo: Link UP Oct 9 07:49:52.651082 systemd-networkd[751]: lo: Gained carrier Oct 9 07:49:52.653475 systemd-networkd[751]: Enumeration completed Oct 9 07:49:52.653682 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:49:52.654073 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Oct 9 07:49:52.654079 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Oct 9 07:49:52.654414 systemd[1]: Reached target network.target - Network. Oct 9 07:49:52.655190 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:49:52.655194 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 9 07:49:52.656876 systemd-networkd[751]: eth0: Link UP Oct 9 07:49:52.656883 systemd-networkd[751]: eth0: Gained carrier Oct 9 07:49:52.656895 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Oct 9 07:49:52.662243 systemd-networkd[751]: eth1: Link UP Oct 9 07:49:52.662248 systemd-networkd[751]: eth1: Gained carrier Oct 9 07:49:52.662262 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:49:52.664078 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 9 07:49:52.675903 systemd-networkd[751]: eth0: DHCPv4 address 64.23.134.87/20, gateway 64.23.128.1 acquired from 169.254.169.253 Oct 9 07:49:52.690885 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253 Oct 9 07:49:52.701130 ignition[756]: Ignition 2.19.0 Oct 9 07:49:52.701153 ignition[756]: Stage: fetch Oct 9 07:49:52.701510 ignition[756]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:49:52.701530 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:49:52.702977 ignition[756]: parsed url from cmdline: "" Oct 9 07:49:52.702983 ignition[756]: no config URL provided Oct 9 07:49:52.702993 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:49:52.703023 ignition[756]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:49:52.703051 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Oct 9 07:49:52.736515 ignition[756]: GET result: OK Oct 9 07:49:52.736690 ignition[756]: parsing config with SHA512: dbd736c70ef383bb65e541a6bb5723d9bcb4f28e08684501c45301dd2fe4e06ae5f47245ecaad6fc86defeefddcbfcbbba8fa94db9fcce7874aa3197c15ab1e0 Oct 9 07:49:52.742791 unknown[756]: fetched base config from "system" Oct 9 07:49:52.742807 unknown[756]: fetched base config from "system" Oct 
9 07:49:52.743554 ignition[756]: fetch: fetch complete Oct 9 07:49:52.742818 unknown[756]: fetched user config from "digitalocean" Oct 9 07:49:52.743564 ignition[756]: fetch: fetch passed Oct 9 07:49:52.746105 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 07:49:52.743651 ignition[756]: Ignition finished successfully Oct 9 07:49:52.754167 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 07:49:52.787702 ignition[763]: Ignition 2.19.0 Oct 9 07:49:52.787715 ignition[763]: Stage: kargs Oct 9 07:49:52.788022 ignition[763]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:49:52.788036 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:49:52.789171 ignition[763]: kargs: kargs passed Oct 9 07:49:52.790505 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 07:49:52.789236 ignition[763]: Ignition finished successfully Oct 9 07:49:52.796127 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 07:49:52.844067 ignition[769]: Ignition 2.19.0 Oct 9 07:49:52.844095 ignition[769]: Stage: disks Oct 9 07:49:52.844467 ignition[769]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:49:52.844487 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:49:52.846043 ignition[769]: disks: disks passed Oct 9 07:49:52.848505 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 07:49:52.846118 ignition[769]: Ignition finished successfully Oct 9 07:49:52.854065 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 07:49:52.854682 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:49:52.855175 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:49:52.856471 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:49:52.859603 systemd[1]: Reached target basic.target - Basic System. 
Oct 9 07:49:52.872641 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 07:49:52.901860 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 07:49:52.905301 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 07:49:52.912966 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 07:49:53.062094 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 9 07:49:53.063844 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 07:49:53.066498 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 07:49:53.075711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:49:53.093131 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 07:49:53.099089 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Oct 9 07:49:53.111356 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 9 07:49:53.113450 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785) Oct 9 07:49:53.113632 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 07:49:53.121373 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:49:53.121449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:49:53.121475 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:49:53.117662 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:49:53.149483 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Oct 9 07:49:53.154066 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:49:53.184284 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 07:49:53.192106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 07:49:53.288795 coreos-metadata[788]: Oct 09 07:49:53.286 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:49:53.299612 coreos-metadata[788]: Oct 09 07:49:53.298 INFO Fetch successful Oct 9 07:49:53.306855 coreos-metadata[787]: Oct 09 07:49:53.306 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:49:53.309535 coreos-metadata[788]: Oct 09 07:49:53.309 INFO wrote hostname ci-4081.1.0-6-a1de16b848 to /sysroot/etc/hostname Oct 9 07:49:53.312348 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 9 07:49:53.315483 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 07:49:53.321923 coreos-metadata[787]: Oct 09 07:49:53.321 INFO Fetch successful Oct 9 07:49:53.332032 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Oct 9 07:49:53.333531 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Oct 9 07:49:53.333741 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Oct 9 07:49:53.352664 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 07:49:53.371588 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 07:49:53.586582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 07:49:53.590959 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 07:49:53.595043 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 07:49:53.619382 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Oct 9 07:49:53.622765 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:49:53.657429 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 07:49:53.688264 ignition[907]: INFO : Ignition 2.19.0 Oct 9 07:49:53.699312 ignition[907]: INFO : Stage: mount Oct 9 07:49:53.699312 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:49:53.699312 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:49:53.702499 ignition[907]: INFO : mount: mount passed Oct 9 07:49:53.702499 ignition[907]: INFO : Ignition finished successfully Oct 9 07:49:53.704205 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 07:49:53.711646 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 07:49:53.767677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:49:53.795792 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918) Oct 9 07:49:53.799933 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:49:53.800030 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:49:53.800055 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:49:53.810846 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:49:53.819263 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 07:49:53.887849 ignition[935]: INFO : Ignition 2.19.0 Oct 9 07:49:53.887849 ignition[935]: INFO : Stage: files Oct 9 07:49:53.887849 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:49:53.887849 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:49:53.892450 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:49:53.894637 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:49:53.894637 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:49:53.895953 systemd-networkd[751]: eth0: Gained IPv6LL Oct 9 07:49:53.898480 systemd-networkd[751]: eth1: Gained IPv6LL Oct 9 07:49:53.902082 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:49:53.903525 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:49:53.904168 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:49:53.904035 unknown[935]: wrote ssh authorized keys file for user: core Oct 9 07:49:53.906631 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:49:53.906631 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:49:53.953437 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 07:49:54.087240 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:49:54.087240 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 9 07:49:54.089627 ignition[935]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:49:54.089627 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:49:54.089627 ignition[935]: INFO : 
files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Oct 9 07:49:54.534816 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 9 07:49:54.873324 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:49:54.873324 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 9 07:49:54.876183 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:49:54.876183 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:49:54.876183 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 9 07:49:54.876183 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 9 07:49:54.876183 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 07:49:54.876183 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:49:54.876183 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:49:54.876183 ignition[935]: INFO : files: files passed Oct 9 07:49:54.876183 ignition[935]: INFO : Ignition finished successfully Oct 9 07:49:54.877472 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 07:49:54.887243 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 07:49:54.897165 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Oct 9 07:49:54.906787 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:49:54.908331 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:49:54.919313 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:49:54.919313 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:49:54.923045 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:49:54.926800 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:49:54.928449 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:49:54.934366 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:49:55.002659 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 07:49:55.002895 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:49:55.005063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:49:55.005650 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:49:55.006992 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:49:55.021186 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:49:55.043064 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:49:55.053127 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:49:55.067254 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:49:55.068757 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:49:55.070255 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:49:55.071483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:49:55.071666 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:49:55.072697 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:49:55.073311 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:49:55.075289 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:49:55.076756 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:49:55.078241 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:49:55.079702 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:49:55.080836 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:49:55.082388 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:49:55.083536 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:49:55.085000 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:49:55.086370 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:49:55.086696 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:49:55.088907 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:49:55.090360 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:49:55.091759 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:49:55.091980 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:49:55.093153 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:49:55.093484 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:49:55.095313 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:49:55.095578 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:49:55.096786 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:49:55.096920 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:49:55.098063 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 07:49:55.098235 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 07:49:55.107252 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:49:55.108887 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:49:55.110015 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:49:55.118123 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:49:55.120530 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:49:55.121328 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:49:55.124842 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:49:55.125075 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:49:55.135321 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:49:55.135516 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:49:55.144772 ignition[988]: INFO : Ignition 2.19.0
Oct 9 07:49:55.144772 ignition[988]: INFO : Stage: umount
Oct 9 07:49:55.144772 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:49:55.144772 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:49:55.147678 ignition[988]: INFO : umount: umount passed
Oct 9 07:49:55.147678 ignition[988]: INFO : Ignition finished successfully
Oct 9 07:49:55.149246 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:49:55.150286 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:49:55.151969 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:49:55.152144 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:49:55.153252 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:49:55.153327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:49:55.160798 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 07:49:55.160898 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 07:49:55.161822 systemd[1]: Stopped target network.target - Network.
Oct 9 07:49:55.163811 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:49:55.163929 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:49:55.164882 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:49:55.165287 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:49:55.168960 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:49:55.169687 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:49:55.173308 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:49:55.173913 systemd[1]: iscsid.socket: Deactivated successfully.
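Ignition entries like the umount-stage block above follow a fixed `ignition[pid]: LEVEL : message` shape, which makes them easy to pull out of a mixed journal. A minimal parsing sketch (sample lines copied from this log; the helper name is ours, not an Ignition API):

```python
import re

# Matches journal lines such as:
#   Oct 9 07:49:55.144772 ignition[988]: INFO : Stage: umount
IGNITION_RE = re.compile(
    r"^(?P<ts>\w+ +\d+ \d\d:\d\d:\d\d\.\d+) "
    r"ignition\[(?P<pid>\d+)\]: (?P<level>\w+) : (?P<msg>.*)$"
)

def parse_ignition(lines):
    """Yield (pid, level, message) for Ignition entries, skipping all others."""
    for line in lines:
        m = IGNITION_RE.match(line)
        if m:
            yield int(m.group("pid")), m.group("level"), m.group("msg")

sample = [
    "Oct 9 07:49:55.144772 ignition[988]: INFO : Stage: umount",
    "Oct 9 07:49:55.147678 ignition[988]: INFO : Ignition finished successfully",
    "Oct 9 07:49:55.149246 systemd[1]: ignition-mount.service: Deactivated successfully.",
]
print(list(parse_ignition(sample)))
```

The systemd line in the sample is deliberately skipped, showing the filter doing its job.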
Oct 9 07:49:55.173994 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:49:55.174558 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:49:55.174617 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:49:55.201339 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:49:55.201451 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:49:55.202299 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:49:55.202371 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:49:55.203548 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:49:55.204357 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:49:55.228121 systemd-networkd[751]: eth0: DHCPv6 lease lost
Oct 9 07:49:55.228259 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:49:55.232874 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:49:55.234450 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:49:55.250178 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:49:55.250393 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 07:49:55.251476 systemd-networkd[751]: eth1: DHCPv6 lease lost
Oct 9 07:49:55.256502 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:49:55.256800 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:49:55.274088 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:49:55.274239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:49:55.285149 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:49:55.285851 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:49:55.285998 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:49:55.287140 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:49:55.287256 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:49:55.289758 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:49:55.289861 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:49:55.294529 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:49:55.306177 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:49:55.306385 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:49:55.309296 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:49:55.310130 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:49:55.317590 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:49:55.317750 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:49:55.318706 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:49:55.318816 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:49:55.320158 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:49:55.320239 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:49:55.322438 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:49:55.322517 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:49:55.324294 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:49:55.324381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:49:55.325433 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:49:55.325515 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:49:55.330723 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:49:55.332549 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:49:55.332678 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:49:55.336017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:49:55.336137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:49:55.345312 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:49:55.346111 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:49:55.355868 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:49:55.356063 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:49:55.358682 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:49:55.366135 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:49:55.396154 systemd[1]: Switching root.
Oct 9 07:49:55.437896 systemd-journald[181]: Journal stopped
Oct 9 07:49:56.777620 systemd-journald[181]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:49:56.777717 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:49:56.777736 kernel: SELinux: policy capability open_perms=1
Oct 9 07:49:56.777771 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:49:56.777785 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:49:56.777798 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:49:56.777811 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:49:56.777836 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:49:56.777855 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:49:56.777868 kernel: audit: type=1403 audit(1728460195.639:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:49:56.777883 systemd[1]: Successfully loaded SELinux policy in 44.351ms.
Oct 9 07:49:56.777910 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.576ms.
Oct 9 07:49:56.777925 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:49:56.777939 systemd[1]: Detected virtualization kvm.
Oct 9 07:49:56.777952 systemd[1]: Detected architecture x86-64.
Oct 9 07:49:56.777965 systemd[1]: Detected first boot.
Oct 9 07:49:56.777984 systemd[1]: Hostname set to .
Oct 9 07:49:56.778001 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:49:56.778023 zram_generator::config[1031]: No configuration found.
Oct 9 07:49:56.778044 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:49:56.778065 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 07:49:56.778086 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 07:49:56.778105 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:49:56.778125 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:49:56.778154 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:49:56.778168 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:49:56.778181 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:49:56.778194 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:49:56.778221 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:49:56.778235 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:49:56.778253 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:49:56.778267 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:49:56.778280 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:49:56.778314 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:49:56.778327 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:49:56.778358 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:49:56.778371 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:49:56.778385 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:49:56.778404 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:49:56.778417 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 07:49:56.778435 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 07:49:56.778451 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:49:56.778473 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:49:56.778491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:49:56.778513 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:49:56.778528 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:49:56.778543 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:49:56.778557 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:49:56.778575 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:49:56.778590 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:49:56.778604 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:49:56.778618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:49:56.778632 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:49:56.778647 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:49:56.778662 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:49:56.778675 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:49:56.778689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:49:56.778707 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:49:56.778720 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:49:56.780135 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:49:56.780193 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:49:56.780216 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:49:56.780236 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:49:56.780270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:49:56.780289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:49:56.780308 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:49:56.780337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:49:56.780356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:49:56.780375 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:49:56.780395 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:49:56.780414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:49:56.780434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:49:56.780454 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 07:49:56.780474 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 07:49:56.780498 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 07:49:56.780517 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 07:49:56.780537 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:49:56.780555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:49:56.780574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:49:56.780594 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:49:56.780712 systemd-journald[1107]: Collecting audit messages is disabled.
Oct 9 07:49:56.780792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:49:56.780822 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 07:49:56.780844 systemd[1]: Stopped verity-setup.service.
Oct 9 07:49:56.780866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:49:56.780887 systemd-journald[1107]: Journal started
Oct 9 07:49:56.780924 systemd-journald[1107]: Runtime Journal (/run/log/journal/5ae57698bb39487dbe756af032377d77) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:49:56.445583 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:49:56.471669 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:49:56.472144 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 07:49:56.791754 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:49:56.794430 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:49:56.795863 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:49:56.798128 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:49:56.800483 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:49:56.801395 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:49:56.802805 kernel: loop: module loaded
Oct 9 07:49:56.810030 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:49:56.813854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:49:56.815217 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:49:56.815463 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:49:56.816644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:49:56.817282 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:49:56.820027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:49:56.820342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:49:56.822829 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:49:56.823088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:49:56.846770 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:49:56.850931 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:49:56.860643 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:49:56.876632 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:49:56.886942 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:49:56.888910 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:49:56.888977 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:49:56.898153 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:49:56.910317 kernel: ACPI: bus type drm_connector registered
Oct 9 07:49:56.912141 kernel: fuse: init (API version 7.39)
Oct 9 07:49:56.914098 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:49:56.917115 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:49:56.918051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:49:56.925042 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:49:56.936204 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:49:56.938603 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:49:56.945053 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:49:56.946294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:49:56.949359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:49:56.959691 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:49:56.960226 systemd-journald[1107]: Time spent on flushing to /var/log/journal/5ae57698bb39487dbe756af032377d77 is 48.019ms for 976 entries.
Oct 9 07:49:56.960226 systemd-journald[1107]: System Journal (/var/log/journal/5ae57698bb39487dbe756af032377d77) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:49:57.023094 systemd-journald[1107]: Received client request to flush runtime journal.
Oct 9 07:49:56.970899 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:49:56.972295 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:49:56.973966 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:49:56.975334 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:49:56.976145 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:49:56.987441 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:49:56.989836 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:49:57.027911 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:49:57.034373 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:49:57.035831 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:49:57.042282 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:49:57.107516 kernel: loop0: detected capacity change from 0 to 140768
Oct 9 07:49:57.108555 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:49:57.114634 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:49:57.128037 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:49:57.160448 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:49:57.179325 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:49:57.215774 kernel: loop1: detected capacity change from 0 to 142488
Oct 9 07:49:57.221249 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:49:57.230156 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:49:57.248764 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:49:57.260516 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:49:57.299809 kernel: loop2: detected capacity change from 0 to 8
Oct 9 07:49:57.308781 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:49:57.316992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:49:57.345257 kernel: loop3: detected capacity change from 0 to 205544
Oct 9 07:49:57.345950 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 07:49:57.408444 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Oct 9 07:49:57.408471 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Oct 9 07:49:57.420843 kernel: loop4: detected capacity change from 0 to 140768
Oct 9 07:49:57.433027 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:49:57.449781 kernel: loop5: detected capacity change from 0 to 142488
Oct 9 07:49:57.469766 kernel: loop6: detected capacity change from 0 to 8
Oct 9 07:49:57.471928 kernel: loop7: detected capacity change from 0 to 205544
Oct 9 07:49:57.498936 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 9 07:49:57.501481 (sd-merge)[1176]: Merged extensions into '/usr'.
Oct 9 07:49:57.512705 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:49:57.512987 systemd[1]: Reloading...
Oct 9 07:49:57.660889 zram_generator::config[1202]: No configuration found.
Oct 9 07:49:57.900851 ldconfig[1142]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:49:57.995165 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:49:58.063158 systemd[1]: Reloading finished in 549 ms.
Oct 9 07:49:58.086932 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:49:58.092262 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:49:58.102158 systemd[1]: Starting ensure-sysext.service...
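The `(sd-merge)` entries above show systemd-sysext attaching four extension images (each mounted via a `loopN` device, hence the preceding capacity-change lines) as overlays on `/usr`. For an image like `kubernetes.raw` to be accepted, it must carry an extension-release file whose fields match the host's os-release; a minimal sketch of such a file (the field values here are an assumption about what sysext-bakery ships, not taken from this log):

```ini
# Inside kubernetes.raw at:
#   /usr/lib/extension-release.d/extension-release.kubernetes
# The file name must match the image name; ID/SYSEXT_LEVEL must be
# compatible with the host's /etc/os-release for the merge to proceed.
ID=flatcar
SYSEXT_LEVEL=1.0
```

If the match fails, systemd-sysext refuses the image instead of merging it, which is why the image name and release file travel together.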
Oct 9 07:49:58.106325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 07:49:58.122173 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:49:58.122207 systemd[1]: Reloading...
Oct 9 07:49:58.176235 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:49:58.177209 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:49:58.178633 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:49:58.179421 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Oct 9 07:49:58.179678 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Oct 9 07:49:58.187238 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:49:58.187467 systemd-tmpfiles[1247]: Skipping /boot
Oct 9 07:49:58.208820 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:49:58.209022 systemd-tmpfiles[1247]: Skipping /boot
Oct 9 07:49:58.300336 zram_generator::config[1274]: No configuration found.
Oct 9 07:49:58.480827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:49:58.544881 systemd[1]: Reloading finished in 421 ms.
Oct 9 07:49:58.563614 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:49:58.569479 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 07:49:58.590096 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:49:58.594978 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
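The "Duplicate line for path … ignoring" warnings above are benign tmpfiles.d behavior: when two configuration lines claim the same path, systemd-tmpfiles applies the first one it parses and ignores the rest with exactly this warning. A hypothetical fragment reproducing the pattern (the real lines in provision.conf are not shown in the log):

```ini
# /etc/tmpfiles.d/example.conf -- illustrative only
# First line for /root wins; the second triggers
# "Duplicate line for path "/root", ignoring."
d /root 0700 root root -
d /root 0755 root root -
```

Since merging the sysext images just added more files under `/usr/lib/tmpfiles.d/`, overlapping path entries, and therefore these warnings, are expected on a boot like this one.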
Oct 9 07:49:58.605088 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 07:49:58.610182 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:49:58.613634 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:49:58.621087 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 07:49:58.629493 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:49:58.629777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:49:58.640108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:49:58.645842 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:49:58.652277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:49:58.654112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:49:58.654347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:49:58.668216 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 07:49:58.670085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:49:58.670288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:49:58.670468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 9 07:49:58.670588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:49:58.675369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:49:58.675673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:49:58.682173 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:49:58.684109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:49:58.684317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:49:58.689024 systemd[1]: Finished ensure-sysext.service. Oct 9 07:49:58.701167 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 07:49:58.709495 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 07:49:58.711825 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:49:58.736873 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 07:49:58.740939 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Oct 9 07:49:58.743198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:49:58.743430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:49:58.744378 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:49:58.744549 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Oct 9 07:49:58.769499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:49:58.772009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:49:58.773136 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 07:49:58.776985 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:49:58.786176 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 07:49:58.787381 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:49:58.789883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:49:58.792956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:49:58.822956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:49:58.831769 augenrules[1355]: No rules Oct 9 07:49:58.836029 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:49:58.836883 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 07:49:58.839612 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:49:58.860945 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 07:49:59.000404 systemd-resolved[1326]: Positive Trust Anchors: Oct 9 07:49:59.000424 systemd-resolved[1326]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:49:59.000487 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 07:49:59.011726 systemd-resolved[1326]: Using system hostname 'ci-4081.1.0-6-a1de16b848'. Oct 9 07:49:59.018157 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:49:59.019803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:49:59.038452 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 07:49:59.039972 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 07:49:59.058457 systemd-networkd[1358]: lo: Link UP Oct 9 07:49:59.058473 systemd-networkd[1358]: lo: Gained carrier Oct 9 07:49:59.061143 systemd-networkd[1358]: Enumeration completed Oct 9 07:49:59.061340 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:49:59.062984 systemd[1]: Reached target network.target - Network. Oct 9 07:49:59.072006 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 07:49:59.097763 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1374) Oct 9 07:49:59.101775 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Oct 9 07:49:59.107833 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1374) Oct 9 07:49:59.167957 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Oct 9 07:49:59.169866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:49:59.170122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:49:59.179485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:49:59.191163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:49:59.203016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:49:59.206018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:49:59.206098 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:49:59.206128 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:49:59.231878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:49:59.232129 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:49:59.239078 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:49:59.239370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:49:59.241594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 9 07:49:59.244808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1383) Oct 9 07:49:59.248359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:49:59.248645 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:49:59.253164 kernel: ISO 9660 Extensions: RRIP_1991A Oct 9 07:49:59.261419 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Oct 9 07:49:59.272674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:49:59.322785 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 9 07:49:59.339765 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 07:49:59.351866 kernel: ACPI: button: Power Button [PWRF] Oct 9 07:49:59.357649 systemd-networkd[1358]: eth0: Configuring with /run/systemd/network/10-8e:55:d1:ce:e1:a9.network. Oct 9 07:49:59.361374 systemd-networkd[1358]: eth1: Configuring with /run/systemd/network/10-f2:1a:46:ee:de:77.network. Oct 9 07:49:59.364283 systemd-networkd[1358]: eth0: Link UP Oct 9 07:49:59.364298 systemd-networkd[1358]: eth0: Gained carrier Oct 9 07:49:59.367561 systemd-networkd[1358]: eth1: Link UP Oct 9 07:49:59.367574 systemd-networkd[1358]: eth1: Gained carrier Oct 9 07:49:59.378931 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Oct 9 07:49:59.379939 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Oct 9 07:49:59.389711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:49:59.402085 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Oct 9 07:49:59.455680 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 07:49:59.472772 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 9 07:49:59.536769 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 07:49:59.537093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:49:59.589763 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Oct 9 07:49:59.590771 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Oct 9 07:49:59.597772 kernel: Console: switching to colour dummy device 80x25 Oct 9 07:49:59.600370 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 9 07:49:59.600525 kernel: [drm] features: -context_init Oct 9 07:49:59.605915 kernel: [drm] number of scanouts: 1 Oct 9 07:49:59.606024 kernel: [drm] number of cap sets: 0 Oct 9 07:49:59.676784 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Oct 9 07:49:59.691667 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 9 07:49:59.692130 kernel: Console: switching to colour frame buffer device 128x48 Oct 9 07:49:59.702512 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 9 07:49:59.702232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:49:59.702853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:49:59.724094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:49:59.770896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:49:59.771299 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:49:59.779235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 9 07:49:59.787773 kernel: EDAC MC: Ver: 3.0.0 Oct 9 07:49:59.820931 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 07:49:59.835091 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 07:49:59.865249 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:49:59.866901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:49:59.899850 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 07:49:59.901483 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:49:59.901672 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:49:59.901927 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 07:49:59.902124 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 07:49:59.902604 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 07:49:59.903584 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 07:49:59.904137 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 07:49:59.904425 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 07:49:59.904617 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:49:59.904874 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:49:59.908312 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 07:49:59.911360 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 07:49:59.918939 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Oct 9 07:49:59.923641 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 07:49:59.931806 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 07:49:59.940211 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:49:59.942927 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:49:59.946115 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:49:59.946178 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:49:59.954725 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:49:59.959961 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 07:49:59.968052 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 07:49:59.986164 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 07:49:59.997016 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 07:50:00.004109 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 07:50:00.005014 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 07:50:00.011112 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 07:50:00.022015 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 07:50:00.034109 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 07:50:00.048136 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 9 07:50:00.060824 coreos-metadata[1437]: Oct 09 07:50:00.059 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:50:00.066086 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 07:50:00.073524 coreos-metadata[1437]: Oct 09 07:50:00.073 INFO Fetch successful Oct 9 07:50:00.072951 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 07:50:00.076236 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 07:50:00.080132 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 07:50:00.080942 dbus-daemon[1438]: [system] SELinux support is enabled Oct 9 07:50:00.092050 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 07:50:00.096194 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 07:50:00.109428 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 07:50:00.119773 jq[1441]: false Oct 9 07:50:00.144895 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 07:50:00.146933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 07:50:00.172170 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:50:00.175648 jq[1449]: true Oct 9 07:50:00.176422 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 07:50:00.176496 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 9 07:50:00.180298 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 07:50:00.180498 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Oct 9 07:50:00.180536 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 07:50:00.191489 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 07:50:00.191902 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 07:50:00.226105 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 07:50:00.253455 extend-filesystems[1442]: Found loop4 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found loop5 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found loop6 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found loop7 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found vda Oct 9 07:50:00.265913 extend-filesystems[1442]: Found vda1 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found vda2 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found vda3 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found usr Oct 9 07:50:00.265913 extend-filesystems[1442]: Found vda4 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found vda6 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found vda7 Oct 9 07:50:00.265913 extend-filesystems[1442]: Found vda9 Oct 9 07:50:00.265913 extend-filesystems[1442]: Checking size of /dev/vda9 Oct 9 07:50:00.352537 tar[1458]: linux-amd64/helm Oct 9 07:50:00.367201 jq[1463]: true Oct 9 07:50:00.294842 systemd[1]: Started update-engine.service - Update Engine. 
Oct 9 07:50:00.367564 update_engine[1448]: I20241009 07:50:00.268092 1448 main.cc:92] Flatcar Update Engine starting Oct 9 07:50:00.367564 update_engine[1448]: I20241009 07:50:00.297082 1448 update_check_scheduler.cc:74] Next update check in 6m3s Oct 9 07:50:00.315999 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 07:50:00.332231 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 07:50:00.332581 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 07:50:00.340730 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 9 07:50:00.360410 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 07:50:00.415272 extend-filesystems[1442]: Resized partition /dev/vda9 Oct 9 07:50:00.440781 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1384) Oct 9 07:50:00.440889 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024) Oct 9 07:50:00.459270 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Oct 9 07:50:00.520856 systemd-logind[1447]: New seat seat0. Oct 9 07:50:00.549609 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 07:50:00.549654 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 07:50:00.550160 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 07:50:00.615909 systemd-networkd[1358]: eth0: Gained IPv6LL Oct 9 07:50:00.616603 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Oct 9 07:50:00.633619 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 07:50:00.640183 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 07:50:00.656135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 07:50:00.667264 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 07:50:00.719000 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:50:00.722048 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 07:50:00.751664 systemd[1]: Starting sshkeys.service... Oct 9 07:50:00.756033 systemd-networkd[1358]: eth1: Gained IPv6LL Oct 9 07:50:00.756540 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Oct 9 07:50:00.816399 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 07:50:00.877361 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 9 07:50:00.881551 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 9 07:50:00.898404 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 9 07:50:01.214364 extend-filesystems[1487]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 07:50:01.214364 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 9 07:50:01.214364 extend-filesystems[1487]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 9 07:50:01.245950 coreos-metadata[1523]: Oct 09 07:50:01.165 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:50:01.053843 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 07:50:01.251619 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Oct 9 07:50:01.251619 extend-filesystems[1442]: Found vdb Oct 9 07:50:01.215874 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 07:50:01.216146 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Oct 9 07:50:01.271499 coreos-metadata[1523]: Oct 09 07:50:01.266 INFO Fetch successful Oct 9 07:50:01.314877 unknown[1523]: wrote ssh authorized keys file for user: core Oct 9 07:50:01.509195 containerd[1459]: time="2024-10-09T07:50:01.507033066Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 9 07:50:01.620105 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:50:01.622240 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 9 07:50:01.637797 systemd[1]: Finished sshkeys.service. Oct 9 07:50:01.686584 containerd[1459]: time="2024-10-09T07:50:01.686470562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.698714047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.698814229Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.698847122Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.699227144Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.699265451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.699361942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.699402048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.699763755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.699819636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.699846222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:50:01.701463 containerd[1459]: time="2024-10-09T07:50:01.699864828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 07:50:01.702069 containerd[1459]: time="2024-10-09T07:50:01.700059220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:50:01.702069 containerd[1459]: time="2024-10-09T07:50:01.700670965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:50:01.707486 containerd[1459]: time="2024-10-09T07:50:01.707401329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:50:01.707871 containerd[1459]: time="2024-10-09T07:50:01.707776970Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 07:50:01.708872 containerd[1459]: time="2024-10-09T07:50:01.708831107Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 07:50:01.712279 containerd[1459]: time="2024-10-09T07:50:01.710927797Z" level=info msg="metadata content store policy set" policy=shared Oct 9 07:50:01.804098 containerd[1459]: time="2024-10-09T07:50:01.803924714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 07:50:01.804723 containerd[1459]: time="2024-10-09T07:50:01.804677894Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 07:50:01.805847 containerd[1459]: time="2024-10-09T07:50:01.805475184Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 07:50:01.805847 containerd[1459]: time="2024-10-09T07:50:01.805542881Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 07:50:01.805847 containerd[1459]: time="2024-10-09T07:50:01.805643379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 07:50:01.806282 containerd[1459]: time="2024-10-09T07:50:01.806253771Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 07:50:01.808688 containerd[1459]: time="2024-10-09T07:50:01.808281954Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 9 07:50:01.808688 containerd[1459]: time="2024-10-09T07:50:01.808637890Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 07:50:01.809055 containerd[1459]: time="2024-10-09T07:50:01.808906978Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 07:50:01.809055 containerd[1459]: time="2024-10-09T07:50:01.808948571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 07:50:01.809055 containerd[1459]: time="2024-10-09T07:50:01.809012004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 07:50:01.809808 containerd[1459]: time="2024-10-09T07:50:01.809039906Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 07:50:01.809808 containerd[1459]: time="2024-10-09T07:50:01.809469583Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 07:50:01.809808 containerd[1459]: time="2024-10-09T07:50:01.809515781Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 07:50:01.809808 containerd[1459]: time="2024-10-09T07:50:01.809557797Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 07:50:01.809808 containerd[1459]: time="2024-10-09T07:50:01.809618253Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 07:50:01.809808 containerd[1459]: time="2024-10-09T07:50:01.809640770Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 9 07:50:01.810834 containerd[1459]: time="2024-10-09T07:50:01.810158471Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 07:50:01.810834 containerd[1459]: time="2024-10-09T07:50:01.810230541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.810834 containerd[1459]: time="2024-10-09T07:50:01.810257085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.810834 containerd[1459]: time="2024-10-09T07:50:01.810278957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.810834 containerd[1459]: time="2024-10-09T07:50:01.810316446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.810834 containerd[1459]: time="2024-10-09T07:50:01.810337294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.810834 containerd[1459]: time="2024-10-09T07:50:01.810724872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.810834 containerd[1459]: time="2024-10-09T07:50:01.810794243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.811697344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.811771736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.811803239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.811839461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.811863891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.811889087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.811956914Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.812852011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.812902939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.812924847Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.813039822Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.813191628Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:50:01.813416 containerd[1459]: time="2024-10-09T07:50:01.813227736Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Oct 9 07:50:01.814063 containerd[1459]: time="2024-10-09T07:50:01.813266392Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:50:01.814063 containerd[1459]: time="2024-10-09T07:50:01.813283298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.814063 containerd[1459]: time="2024-10-09T07:50:01.813301715Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 07:50:01.814063 containerd[1459]: time="2024-10-09T07:50:01.813334863Z" level=info msg="NRI interface is disabled by configuration." Oct 9 07:50:01.814063 containerd[1459]: time="2024-10-09T07:50:01.813354295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 9 07:50:01.819845 containerd[1459]: time="2024-10-09T07:50:01.816820486Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:50:01.819845 containerd[1459]: time="2024-10-09T07:50:01.817034566Z" level=info msg="Connect containerd service" Oct 9 07:50:01.819845 containerd[1459]: time="2024-10-09T07:50:01.817112708Z" level=info msg="using legacy CRI server" Oct 9 07:50:01.819845 containerd[1459]: time="2024-10-09T07:50:01.817125715Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:50:01.819845 containerd[1459]: time="2024-10-09T07:50:01.817341695Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:50:01.819845 containerd[1459]: time="2024-10-09T07:50:01.819095600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:50:01.821782 containerd[1459]: time="2024-10-09T07:50:01.820664748Z" level=info msg="Start subscribing containerd event" Oct 9 07:50:01.821782 containerd[1459]: time="2024-10-09T07:50:01.820797662Z" level=info msg="Start recovering state" Oct 9 07:50:01.821782 containerd[1459]: time="2024-10-09T07:50:01.820915683Z" level=info msg="Start event monitor" Oct 9 07:50:01.821782 containerd[1459]: time="2024-10-09T07:50:01.820933969Z" level=info msg="Start snapshots syncer" Oct 9 07:50:01.821782 containerd[1459]: time="2024-10-09T07:50:01.820950348Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:50:01.821782 containerd[1459]: time="2024-10-09T07:50:01.820964033Z" level=info msg="Start streaming server" Oct 9 07:50:01.829235 containerd[1459]: time="2024-10-09T07:50:01.826666787Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:50:01.829235 containerd[1459]: time="2024-10-09T07:50:01.826799221Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 07:50:01.829235 containerd[1459]: time="2024-10-09T07:50:01.826902670Z" level=info msg="containerd successfully booted in 0.359105s" Oct 9 07:50:01.828372 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:50:01.901458 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 07:50:02.064698 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 07:50:02.091866 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Oct 9 07:50:02.127877 systemd[1]: Started sshd@0-64.23.134.87:22-139.178.89.65:55044.service - OpenSSH per-connection server daemon (139.178.89.65:55044). Oct 9 07:50:02.191303 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 07:50:02.191803 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 07:50:02.205407 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 07:50:02.317479 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:50:02.342487 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 07:50:02.356375 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 07:50:02.360207 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:50:02.415088 sshd[1547]: Accepted publickey for core from 139.178.89.65 port 55044 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:02.426402 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:02.457003 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:50:02.471453 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:50:02.482765 systemd-logind[1447]: New session 1 of user core. Oct 9 07:50:02.522870 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:50:02.544702 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:50:02.588204 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:50:02.772334 tar[1458]: linux-amd64/LICENSE Oct 9 07:50:02.773142 tar[1458]: linux-amd64/README.md Oct 9 07:50:02.836965 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 07:50:02.903844 systemd[1559]: Queued start job for default target default.target. Oct 9 07:50:02.914120 systemd[1559]: Created slice app.slice - User Application Slice. 
Oct 9 07:50:02.914193 systemd[1559]: Reached target paths.target - Paths. Oct 9 07:50:02.914219 systemd[1559]: Reached target timers.target - Timers. Oct 9 07:50:02.921887 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:50:02.955388 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:50:02.956747 systemd[1559]: Reached target sockets.target - Sockets. Oct 9 07:50:02.958465 systemd[1559]: Reached target basic.target - Basic System. Oct 9 07:50:02.960983 systemd[1559]: Reached target default.target - Main User Target. Oct 9 07:50:02.961057 systemd[1559]: Startup finished in 339ms. Oct 9 07:50:02.961149 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:50:02.974916 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:50:03.067427 systemd[1]: Started sshd@1-64.23.134.87:22-139.178.89.65:55056.service - OpenSSH per-connection server daemon (139.178.89.65:55056). Oct 9 07:50:03.223259 sshd[1573]: Accepted publickey for core from 139.178.89.65 port 55056 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:03.226875 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:03.237401 systemd-logind[1447]: New session 2 of user core. Oct 9 07:50:03.244394 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:50:03.329515 sshd[1573]: pam_unix(sshd:session): session closed for user core Oct 9 07:50:03.348878 systemd[1]: sshd@1-64.23.134.87:22-139.178.89.65:55056.service: Deactivated successfully. Oct 9 07:50:03.352703 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:50:03.358022 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:50:03.365640 systemd[1]: Started sshd@2-64.23.134.87:22-139.178.89.65:55064.service - OpenSSH per-connection server daemon (139.178.89.65:55064). Oct 9 07:50:03.372058 systemd-logind[1447]: Removed session 2. 
Oct 9 07:50:03.455500 sshd[1580]: Accepted publickey for core from 139.178.89.65 port 55064 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:03.461483 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:03.476255 systemd-logind[1447]: New session 3 of user core. Oct 9 07:50:03.490100 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:50:03.573691 sshd[1580]: pam_unix(sshd:session): session closed for user core Oct 9 07:50:03.580315 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:50:03.581136 systemd[1]: sshd@2-64.23.134.87:22-139.178.89.65:55064.service: Deactivated successfully. Oct 9 07:50:03.585019 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:50:03.587312 systemd-logind[1447]: Removed session 3. Oct 9 07:50:04.145653 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:50:04.146029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:50:04.150950 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:50:04.165504 systemd[1]: Startup finished in 1.367s (kernel) + 6.888s (initrd) + 8.568s (userspace) = 16.824s. Oct 9 07:50:05.289313 kubelet[1590]: E1009 07:50:05.289126 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:50:05.293215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:50:05.293450 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:50:05.293879 systemd[1]: kubelet.service: Consumed 1.503s CPU time. 
Oct 9 07:50:13.595350 systemd[1]: Started sshd@3-64.23.134.87:22-139.178.89.65:40684.service - OpenSSH per-connection server daemon (139.178.89.65:40684). Oct 9 07:50:13.646767 sshd[1604]: Accepted publickey for core from 139.178.89.65 port 40684 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:13.649708 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:13.658713 systemd-logind[1447]: New session 4 of user core. Oct 9 07:50:13.666117 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:50:13.737140 sshd[1604]: pam_unix(sshd:session): session closed for user core Oct 9 07:50:13.754656 systemd[1]: sshd@3-64.23.134.87:22-139.178.89.65:40684.service: Deactivated successfully. Oct 9 07:50:13.758736 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:50:13.763250 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:50:13.776362 systemd[1]: Started sshd@4-64.23.134.87:22-139.178.89.65:40698.service - OpenSSH per-connection server daemon (139.178.89.65:40698). Oct 9 07:50:13.778651 systemd-logind[1447]: Removed session 4. Oct 9 07:50:13.831388 sshd[1611]: Accepted publickey for core from 139.178.89.65 port 40698 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:13.834204 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:13.844255 systemd-logind[1447]: New session 5 of user core. Oct 9 07:50:13.851265 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:50:13.914599 sshd[1611]: pam_unix(sshd:session): session closed for user core Oct 9 07:50:13.944286 systemd[1]: sshd@4-64.23.134.87:22-139.178.89.65:40698.service: Deactivated successfully. Oct 9 07:50:13.947134 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:50:13.949683 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. 
Oct 9 07:50:13.958566 systemd[1]: Started sshd@5-64.23.134.87:22-139.178.89.65:40714.service - OpenSSH per-connection server daemon (139.178.89.65:40714). Oct 9 07:50:13.960281 systemd-logind[1447]: Removed session 5. Oct 9 07:50:14.005346 sshd[1618]: Accepted publickey for core from 139.178.89.65 port 40714 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:14.008885 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:14.016161 systemd-logind[1447]: New session 6 of user core. Oct 9 07:50:14.028316 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:50:14.097963 sshd[1618]: pam_unix(sshd:session): session closed for user core Oct 9 07:50:14.119546 systemd[1]: sshd@5-64.23.134.87:22-139.178.89.65:40714.service: Deactivated successfully. Oct 9 07:50:14.123086 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:50:14.124257 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:50:14.145029 systemd[1]: Started sshd@6-64.23.134.87:22-139.178.89.65:40716.service - OpenSSH per-connection server daemon (139.178.89.65:40716). Oct 9 07:50:14.147224 systemd-logind[1447]: Removed session 6. Oct 9 07:50:14.197894 sshd[1625]: Accepted publickey for core from 139.178.89.65 port 40716 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:14.201540 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:14.212496 systemd-logind[1447]: New session 7 of user core. Oct 9 07:50:14.223309 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 9 07:50:14.307468 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:50:14.307947 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:50:14.322442 sudo[1628]: pam_unix(sudo:session): session closed for user root Oct 9 07:50:14.327702 sshd[1625]: pam_unix(sshd:session): session closed for user core Oct 9 07:50:14.338457 systemd[1]: sshd@6-64.23.134.87:22-139.178.89.65:40716.service: Deactivated successfully. Oct 9 07:50:14.341889 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:50:14.346047 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:50:14.352316 systemd[1]: Started sshd@7-64.23.134.87:22-139.178.89.65:40726.service - OpenSSH per-connection server daemon (139.178.89.65:40726). Oct 9 07:50:14.357626 systemd-logind[1447]: Removed session 7. Oct 9 07:50:14.412389 sshd[1633]: Accepted publickey for core from 139.178.89.65 port 40726 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:14.415096 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:14.421859 systemd-logind[1447]: New session 8 of user core. Oct 9 07:50:14.431223 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 07:50:14.501345 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:50:14.501967 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:50:14.508805 sudo[1637]: pam_unix(sudo:session): session closed for user root Oct 9 07:50:14.519549 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:50:14.520779 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:50:14.547859 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Oct 9 07:50:14.552033 auditctl[1640]: No rules Oct 9 07:50:14.553085 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:50:14.553509 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:50:14.571602 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:50:14.618940 augenrules[1658]: No rules Oct 9 07:50:14.620626 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:50:14.624101 sudo[1636]: pam_unix(sudo:session): session closed for user root Oct 9 07:50:14.629210 sshd[1633]: pam_unix(sshd:session): session closed for user core Oct 9 07:50:14.642197 systemd[1]: sshd@7-64.23.134.87:22-139.178.89.65:40726.service: Deactivated successfully. Oct 9 07:50:14.645416 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:50:14.650052 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:50:14.656327 systemd[1]: Started sshd@8-64.23.134.87:22-139.178.89.65:44410.service - OpenSSH per-connection server daemon (139.178.89.65:44410). Oct 9 07:50:14.659641 systemd-logind[1447]: Removed session 8. Oct 9 07:50:14.710399 sshd[1666]: Accepted publickey for core from 139.178.89.65 port 44410 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:50:14.711555 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:50:14.719048 systemd-logind[1447]: New session 9 of user core. Oct 9 07:50:14.727188 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:50:14.791274 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:50:14.791657 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:50:15.433335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:50:15.441300 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 9 07:50:15.444506 (dockerd)[1684]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:50:15.445548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:50:15.664147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:50:15.667273 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:50:15.788016 kubelet[1692]: E1009 07:50:15.785235 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:50:15.794203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:50:15.794495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:50:16.074234 dockerd[1684]: time="2024-10-09T07:50:16.073104608Z" level=info msg="Starting up" Oct 9 07:50:16.269723 dockerd[1684]: time="2024-10-09T07:50:16.269342137Z" level=info msg="Loading containers: start." Oct 9 07:50:16.427817 kernel: Initializing XFRM netlink socket Oct 9 07:50:16.465686 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Oct 9 07:50:17.123817 systemd-resolved[1326]: Clock change detected. Flushing caches. Oct 9 07:50:17.124433 systemd-timesyncd[1338]: Contacted time server 216.229.4.66:123 (2.flatcar.pool.ntp.org). Oct 9 07:50:17.124513 systemd-timesyncd[1338]: Initial clock synchronization to Wed 2024-10-09 07:50:17.123725 UTC. 
Oct 9 07:50:17.156931 systemd-networkd[1358]: docker0: Link UP Oct 9 07:50:17.180920 dockerd[1684]: time="2024-10-09T07:50:17.180663918Z" level=info msg="Loading containers: done." Oct 9 07:50:17.215407 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3293941413-merged.mount: Deactivated successfully. Oct 9 07:50:17.219197 dockerd[1684]: time="2024-10-09T07:50:17.219126092Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:50:17.219469 dockerd[1684]: time="2024-10-09T07:50:17.219295544Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 9 07:50:17.219573 dockerd[1684]: time="2024-10-09T07:50:17.219523362Z" level=info msg="Daemon has completed initialization" Oct 9 07:50:17.302137 dockerd[1684]: time="2024-10-09T07:50:17.301973719Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:50:17.302671 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:50:18.074806 containerd[1459]: time="2024-10-09T07:50:18.074751218Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 9 07:50:18.854225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147153925.mount: Deactivated successfully. 
Oct 9 07:50:20.192637 containerd[1459]: time="2024-10-09T07:50:20.192532443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:20.194882 containerd[1459]: time="2024-10-09T07:50:20.194784708Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066621" Oct 9 07:50:20.196294 containerd[1459]: time="2024-10-09T07:50:20.196228771Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:20.201559 containerd[1459]: time="2024-10-09T07:50:20.200070194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:20.201559 containerd[1459]: time="2024-10-09T07:50:20.200335809Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 2.125533019s" Oct 9 07:50:20.201559 containerd[1459]: time="2024-10-09T07:50:20.200374815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\"" Oct 9 07:50:20.202587 containerd[1459]: time="2024-10-09T07:50:20.202531698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 9 07:50:21.859169 containerd[1459]: time="2024-10-09T07:50:21.859086351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:21.860493 containerd[1459]: time="2024-10-09T07:50:21.860442996Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690922" Oct 9 07:50:21.861667 containerd[1459]: time="2024-10-09T07:50:21.861600514Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:21.867250 containerd[1459]: time="2024-10-09T07:50:21.867167327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:21.869900 containerd[1459]: time="2024-10-09T07:50:21.868919347Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 1.666340401s" Oct 9 07:50:21.869900 containerd[1459]: time="2024-10-09T07:50:21.868995847Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\"" Oct 9 07:50:21.870137 containerd[1459]: time="2024-10-09T07:50:21.870099940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 9 07:50:23.236355 containerd[1459]: time="2024-10-09T07:50:23.236284448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:23.237621 containerd[1459]: time="2024-10-09T07:50:23.237538986Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646758" Oct 9 07:50:23.238907 containerd[1459]: time="2024-10-09T07:50:23.238480098Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:23.242493 containerd[1459]: time="2024-10-09T07:50:23.242451526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:23.244567 containerd[1459]: time="2024-10-09T07:50:23.244508168Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 1.374361969s" Oct 9 07:50:23.244824 containerd[1459]: time="2024-10-09T07:50:23.244567318Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\"" Oct 9 07:50:23.245550 containerd[1459]: time="2024-10-09T07:50:23.245209635Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 9 07:50:23.248344 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Oct 9 07:50:24.551778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059092295.mount: Deactivated successfully. 
Oct 9 07:50:25.142870 containerd[1459]: time="2024-10-09T07:50:25.142063133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:25.144229 containerd[1459]: time="2024-10-09T07:50:25.144143536Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208881" Oct 9 07:50:25.145472 containerd[1459]: time="2024-10-09T07:50:25.145414092Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:25.148327 containerd[1459]: time="2024-10-09T07:50:25.148236914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:50:25.149752 containerd[1459]: time="2024-10-09T07:50:25.148820301Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 1.903578997s" Oct 9 07:50:25.149752 containerd[1459]: time="2024-10-09T07:50:25.148864300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\"" Oct 9 07:50:25.150109 containerd[1459]: time="2024-10-09T07:50:25.150071688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:50:25.704470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount548705849.mount: Deactivated successfully. 
Oct 9 07:50:26.313552 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Oct 9 07:50:26.654206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 07:50:26.662298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:50:26.888247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:50:26.901979 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:50:26.920056 containerd[1459]: time="2024-10-09T07:50:26.919431670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:26.924029 containerd[1459]: time="2024-10-09T07:50:26.923408377Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Oct 9 07:50:26.929717 containerd[1459]: time="2024-10-09T07:50:26.929633684Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:26.935273 containerd[1459]: time="2024-10-09T07:50:26.935205622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:26.937681 containerd[1459]: time="2024-10-09T07:50:26.937506084Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.787396138s"
Oct 9 07:50:26.937681 containerd[1459]: time="2024-10-09T07:50:26.937561863Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 07:50:26.939006 containerd[1459]: time="2024-10-09T07:50:26.938874298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 9 07:50:26.976551 kubelet[1966]: E1009 07:50:26.976469 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:50:26.979948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:50:26.980139 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:50:27.457389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3538651447.mount: Deactivated successfully.
Oct 9 07:50:27.468516 containerd[1459]: time="2024-10-09T07:50:27.468415845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:27.470542 containerd[1459]: time="2024-10-09T07:50:27.470446827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 9 07:50:27.471776 containerd[1459]: time="2024-10-09T07:50:27.471699591Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:27.475725 containerd[1459]: time="2024-10-09T07:50:27.475666678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 535.80899ms"
Oct 9 07:50:27.475725 containerd[1459]: time="2024-10-09T07:50:27.475712260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 9 07:50:27.476257 containerd[1459]: time="2024-10-09T07:50:27.475925695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:27.477339 containerd[1459]: time="2024-10-09T07:50:27.477276878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Oct 9 07:50:28.031350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824400953.mount: Deactivated successfully.
Oct 9 07:50:30.062916 containerd[1459]: time="2024-10-09T07:50:30.061008510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:30.063557 containerd[1459]: time="2024-10-09T07:50:30.063481360Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241740"
Oct 9 07:50:30.064145 containerd[1459]: time="2024-10-09T07:50:30.064097709Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:30.069781 containerd[1459]: time="2024-10-09T07:50:30.069709047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:30.071592 containerd[1459]: time="2024-10-09T07:50:30.071508548Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.594065409s"
Oct 9 07:50:30.071592 containerd[1459]: time="2024-10-09T07:50:30.071587076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Oct 9 07:50:33.685131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:50:33.695360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:50:33.741928 systemd[1]: Reloading requested from client PID 2057 ('systemctl') (unit session-9.scope)...
Oct 9 07:50:33.741953 systemd[1]: Reloading...
Oct 9 07:50:33.900916 zram_generator::config[2097]: No configuration found.
Oct 9 07:50:34.047430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:50:34.131380 systemd[1]: Reloading finished in 388 ms.
Oct 9 07:50:34.197443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:50:34.201745 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:50:34.206756 systemd[1]: kubelet.service: Deactivated successfully.
Oct 9 07:50:34.207039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:50:34.214362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:50:34.373198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:50:34.382792 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 07:50:34.450639 kubelet[2152]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:50:34.450639 kubelet[2152]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 07:50:34.450639 kubelet[2152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:50:34.452961 kubelet[2152]: I1009 07:50:34.452458 2152 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 07:50:35.149750 kubelet[2152]: I1009 07:50:35.149680 2152 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Oct 9 07:50:35.149750 kubelet[2152]: I1009 07:50:35.149730 2152 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 07:50:35.150246 kubelet[2152]: I1009 07:50:35.150219 2152 server.go:929] "Client rotation is on, will bootstrap in background"
Oct 9 07:50:35.190137 kubelet[2152]: I1009 07:50:35.190098 2152 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 07:50:35.190688 kubelet[2152]: E1009 07:50:35.190520 2152 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.134.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:50:35.203308 kubelet[2152]: E1009 07:50:35.203236 2152 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 9 07:50:35.203308 kubelet[2152]: I1009 07:50:35.203281 2152 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 9 07:50:35.208796 kubelet[2152]: I1009 07:50:35.208746 2152 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 07:50:35.208996 kubelet[2152]: I1009 07:50:35.208906 2152 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 9 07:50:35.209131 kubelet[2152]: I1009 07:50:35.209082 2152 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 07:50:35.209310 kubelet[2152]: I1009 07:50:35.209122 2152 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.1.0-6-a1de16b848","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 9 07:50:35.209449 kubelet[2152]: I1009 07:50:35.209322 2152 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 07:50:35.209449 kubelet[2152]: I1009 07:50:35.209334 2152 container_manager_linux.go:300] "Creating device plugin manager"
Oct 9 07:50:35.209530 kubelet[2152]: I1009 07:50:35.209467 2152 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:50:35.212261 kubelet[2152]: I1009 07:50:35.211497 2152 kubelet.go:408] "Attempting to sync node with API server"
Oct 9 07:50:35.212261 kubelet[2152]: I1009 07:50:35.211550 2152 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 07:50:35.212261 kubelet[2152]: I1009 07:50:35.211590 2152 kubelet.go:314] "Adding apiserver pod source"
Oct 9 07:50:35.212261 kubelet[2152]: I1009 07:50:35.211606 2152 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 07:50:35.215582 kubelet[2152]: W1009 07:50:35.215517 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.134.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-6-a1de16b848&limit=500&resourceVersion=0": dial tcp 64.23.134.87:6443: connect: connection refused
Oct 9 07:50:35.215875 kubelet[2152]: E1009 07:50:35.215851 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.134.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-6-a1de16b848&limit=500&resourceVersion=0\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:50:35.216589 kubelet[2152]: W1009 07:50:35.216541 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.134.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.134.87:6443: connect: connection refused
Oct 9 07:50:35.216665 kubelet[2152]: E1009 07:50:35.216594 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.134.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:50:35.217117 kubelet[2152]: I1009 07:50:35.217094 2152 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 9 07:50:35.219472 kubelet[2152]: I1009 07:50:35.219435 2152 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 07:50:35.220600 kubelet[2152]: W1009 07:50:35.220560 2152 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 07:50:35.222902 kubelet[2152]: I1009 07:50:35.222660 2152 server.go:1269] "Started kubelet"
Oct 9 07:50:35.226929 kubelet[2152]: I1009 07:50:35.226728 2152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 07:50:35.229769 kubelet[2152]: E1009 07:50:35.229742 2152 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 07:50:35.230688 kubelet[2152]: I1009 07:50:35.230669 2152 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 9 07:50:35.231506 kubelet[2152]: I1009 07:50:35.231465 2152 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 07:50:35.234375 kubelet[2152]: I1009 07:50:35.234331 2152 server.go:460] "Adding debug handlers to kubelet server"
Oct 9 07:50:35.236180 kubelet[2152]: I1009 07:50:35.236121 2152 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 07:50:35.236544 kubelet[2152]: I1009 07:50:35.236527 2152 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 07:50:35.237333 kubelet[2152]: I1009 07:50:35.237304 2152 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 9 07:50:35.237662 kubelet[2152]: E1009 07:50:35.237571 2152 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.1.0-6-a1de16b848\" not found"
Oct 9 07:50:35.238711 kubelet[2152]: I1009 07:50:35.237960 2152 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 9 07:50:35.238711 kubelet[2152]: I1009 07:50:35.238030 2152 reconciler.go:26] "Reconciler: start to sync state"
Oct 9 07:50:35.240749 kubelet[2152]: I1009 07:50:35.240190 2152 factory.go:221] Registration of the systemd container factory successfully
Oct 9 07:50:35.240749 kubelet[2152]: I1009 07:50:35.240368 2152 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 07:50:35.240749 kubelet[2152]: W1009 07:50:35.240699 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.134.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.134.87:6443: connect: connection refused
Oct 9 07:50:35.240943 kubelet[2152]: E1009 07:50:35.240782 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.134.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:50:35.240943 kubelet[2152]: E1009 07:50:35.240859 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.134.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-6-a1de16b848?timeout=10s\": dial tcp 64.23.134.87:6443: connect: connection refused" interval="200ms"
Oct 9 07:50:35.244049 kubelet[2152]: I1009 07:50:35.242514 2152 factory.go:221] Registration of the containerd container factory successfully
Oct 9 07:50:35.266774 kubelet[2152]: E1009 07:50:35.263034 2152 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.134.87:6443/api/v1/namespaces/default/events\": dial tcp 64.23.134.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.1.0-6-a1de16b848.17fcb96d18d543cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-6-a1de16b848,UID:ci-4081.1.0-6-a1de16b848,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-6-a1de16b848,},FirstTimestamp:2024-10-09 07:50:35.222631373 +0000 UTC m=+0.832252761,LastTimestamp:2024-10-09 07:50:35.222631373 +0000 UTC m=+0.832252761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-6-a1de16b848,}"
Oct 9 07:50:35.267221 kubelet[2152]: I1009 07:50:35.267176 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 07:50:35.273018 kubelet[2152]: I1009 07:50:35.272987 2152 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 07:50:35.273307 kubelet[2152]: I1009 07:50:35.273288 2152 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 07:50:35.273445 kubelet[2152]: I1009 07:50:35.273426 2152 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:50:35.273813 kubelet[2152]: I1009 07:50:35.273121 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 07:50:35.273971 kubelet[2152]: I1009 07:50:35.273958 2152 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 07:50:35.274080 kubelet[2152]: I1009 07:50:35.274068 2152 kubelet.go:2321] "Starting kubelet main sync loop"
Oct 9 07:50:35.274677 kubelet[2152]: E1009 07:50:35.274206 2152 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 07:50:35.276508 kubelet[2152]: W1009 07:50:35.276466 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.134.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.134.87:6443: connect: connection refused
Oct 9 07:50:35.276925 kubelet[2152]: E1009 07:50:35.276647 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.134.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:50:35.278455 kubelet[2152]: I1009 07:50:35.278271 2152 policy_none.go:49] "None policy: Start"
Oct 9 07:50:35.279643 kubelet[2152]: I1009 07:50:35.279621 2152 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 07:50:35.280467 kubelet[2152]: I1009 07:50:35.280002 2152 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 07:50:35.292322 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 9 07:50:35.308919 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 9 07:50:35.313516 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 9 07:50:35.322689 kubelet[2152]: I1009 07:50:35.322651 2152 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 07:50:35.322689 kubelet[2152]: I1009 07:50:35.322947 2152 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 9 07:50:35.322689 kubelet[2152]: I1009 07:50:35.322966 2152 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 07:50:35.323504 kubelet[2152]: I1009 07:50:35.323311 2152 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 07:50:35.328197 kubelet[2152]: E1009 07:50:35.328154 2152 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.1.0-6-a1de16b848\" not found"
Oct 9 07:50:35.388783 systemd[1]: Created slice kubepods-burstable-podffdc5b8e0b4e8064da8ebdee6c4315c2.slice - libcontainer container kubepods-burstable-podffdc5b8e0b4e8064da8ebdee6c4315c2.slice.
Oct 9 07:50:35.416190 systemd[1]: Created slice kubepods-burstable-pod1fd730ea8ae4e3d790c8447cf1f888ca.slice - libcontainer container kubepods-burstable-pod1fd730ea8ae4e3d790c8447cf1f888ca.slice.
Oct 9 07:50:35.425432 kubelet[2152]: I1009 07:50:35.425376 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.426585 kubelet[2152]: E1009 07:50:35.426453 2152 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.134.87:6443/api/v1/nodes\": dial tcp 64.23.134.87:6443: connect: connection refused" node="ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.436328 systemd[1]: Created slice kubepods-burstable-pod7b4a5f3076a93485bdd40ba7fdf9f0af.slice - libcontainer container kubepods-burstable-pod7b4a5f3076a93485bdd40ba7fdf9f0af.slice.
Oct 9 07:50:35.442030 kubelet[2152]: E1009 07:50:35.441971 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.134.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-6-a1de16b848?timeout=10s\": dial tcp 64.23.134.87:6443: connect: connection refused" interval="400ms"
Oct 9 07:50:35.539591 kubelet[2152]: I1009 07:50:35.539521 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.539591 kubelet[2152]: I1009 07:50:35.539594 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.539591 kubelet[2152]: I1009 07:50:35.539613 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.540478 kubelet[2152]: I1009 07:50:35.539630 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.540478 kubelet[2152]: I1009 07:50:35.539657 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fd730ea8ae4e3d790c8447cf1f888ca-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-6-a1de16b848\" (UID: \"1fd730ea8ae4e3d790c8447cf1f888ca\") " pod="kube-system/kube-scheduler-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.540478 kubelet[2152]: I1009 07:50:35.539673 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffdc5b8e0b4e8064da8ebdee6c4315c2-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-6-a1de16b848\" (UID: \"ffdc5b8e0b4e8064da8ebdee6c4315c2\") " pod="kube-system/kube-apiserver-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.540478 kubelet[2152]: I1009 07:50:35.539690 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffdc5b8e0b4e8064da8ebdee6c4315c2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-6-a1de16b848\" (UID: \"ffdc5b8e0b4e8064da8ebdee6c4315c2\") " pod="kube-system/kube-apiserver-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.540478 kubelet[2152]: I1009 07:50:35.539795 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.540710 kubelet[2152]: I1009 07:50:35.539817 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffdc5b8e0b4e8064da8ebdee6c4315c2-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-6-a1de16b848\" (UID: \"ffdc5b8e0b4e8064da8ebdee6c4315c2\") " pod="kube-system/kube-apiserver-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.636353 kubelet[2152]: I1009 07:50:35.636258 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.636874 kubelet[2152]: E1009 07:50:35.636832 2152 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.134.87:6443/api/v1/nodes\": dial tcp 64.23.134.87:6443: connect: connection refused" node="ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:35.712587 kubelet[2152]: E1009 07:50:35.711662 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:35.714105 containerd[1459]: time="2024-10-09T07:50:35.713639215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-6-a1de16b848,Uid:ffdc5b8e0b4e8064da8ebdee6c4315c2,Namespace:kube-system,Attempt:0,}"
Oct 9 07:50:35.722479 systemd-resolved[1326]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Oct 9 07:50:35.727172 kubelet[2152]: E1009 07:50:35.727097 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:35.728357 containerd[1459]: time="2024-10-09T07:50:35.727976760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-6-a1de16b848,Uid:1fd730ea8ae4e3d790c8447cf1f888ca,Namespace:kube-system,Attempt:0,}"
Oct 9 07:50:35.740318 kubelet[2152]: E1009 07:50:35.740225 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:35.741525 containerd[1459]: time="2024-10-09T07:50:35.741048537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-6-a1de16b848,Uid:7b4a5f3076a93485bdd40ba7fdf9f0af,Namespace:kube-system,Attempt:0,}"
Oct 9 07:50:35.842865 kubelet[2152]: E1009 07:50:35.842779 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.134.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-6-a1de16b848?timeout=10s\": dial tcp 64.23.134.87:6443: connect: connection refused" interval="800ms"
Oct 9 07:50:36.038555 kubelet[2152]: I1009 07:50:36.038425 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:36.039659 kubelet[2152]: E1009 07:50:36.039543 2152 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.134.87:6443/api/v1/nodes\": dial tcp 64.23.134.87:6443: connect: connection refused" node="ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:36.124709 kubelet[2152]: W1009 07:50:36.124584 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.134.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.134.87:6443: connect: connection refused
Oct 9 07:50:36.124709 kubelet[2152]: E1009 07:50:36.124651 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.134.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:50:36.319167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3496852427.mount: Deactivated successfully.
Oct 9 07:50:36.328678 containerd[1459]: time="2024-10-09T07:50:36.328574244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:50:36.331857 containerd[1459]: time="2024-10-09T07:50:36.331412058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 07:50:36.333590 containerd[1459]: time="2024-10-09T07:50:36.333505208Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:50:36.335079 containerd[1459]: time="2024-10-09T07:50:36.335001533Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:50:36.336989 containerd[1459]: time="2024-10-09T07:50:36.336782746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Oct 9 07:50:36.338928 containerd[1459]: time="2024-10-09T07:50:36.338050480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:50:36.338928 containerd[1459]: time="2024-10-09T07:50:36.338208739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 07:50:36.342319 containerd[1459]: time="2024-10-09T07:50:36.342230551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:50:36.345272 containerd[1459]: time="2024-10-09T07:50:36.345013680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.931397ms"
Oct 9 07:50:36.349326 containerd[1459]: time="2024-10-09T07:50:36.349221239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.457579ms"
Oct 9 07:50:36.350548 containerd[1459]: time="2024-10-09T07:50:36.350420185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.273528ms"
Oct 9 07:50:36.465562 kubelet[2152]: W1009 07:50:36.465397 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.134.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.134.87:6443: connect: connection refused
Oct 9 07:50:36.465562 kubelet[2152]: E1009 07:50:36.465521 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.134.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:50:36.525867 containerd[1459]: time="2024-10-09T07:50:36.525574872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:50:36.525867 containerd[1459]: time="2024-10-09T07:50:36.525643783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:50:36.525867 containerd[1459]: time="2024-10-09T07:50:36.525665592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:50:36.525867 containerd[1459]: time="2024-10-09T07:50:36.525801922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:50:36.535643 containerd[1459]: time="2024-10-09T07:50:36.535367751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:50:36.535643 containerd[1459]: time="2024-10-09T07:50:36.535441652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:50:36.535917 containerd[1459]: time="2024-10-09T07:50:36.535496182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:50:36.537461 containerd[1459]: time="2024-10-09T07:50:36.537168570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:50:36.537576 containerd[1459]: time="2024-10-09T07:50:36.537338672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:50:36.537625 containerd[1459]: time="2024-10-09T07:50:36.537571156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:50:36.538798 containerd[1459]: time="2024-10-09T07:50:36.537623338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:50:36.538798 containerd[1459]: time="2024-10-09T07:50:36.538742896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:50:36.562221 systemd[1]: Started cri-containerd-ae73fd7d7e4b88db73bd539689fa2b5e3a7e3ccb5eda224b0969f14ac7c215df.scope - libcontainer container ae73fd7d7e4b88db73bd539689fa2b5e3a7e3ccb5eda224b0969f14ac7c215df. Oct 9 07:50:36.570798 systemd[1]: Started cri-containerd-d058f9f78f7f3f8eeb3e3889cc64189dc7eeda1a8418a5184f442e5fa7faf5d9.scope - libcontainer container d058f9f78f7f3f8eeb3e3889cc64189dc7eeda1a8418a5184f442e5fa7faf5d9. Oct 9 07:50:36.586845 systemd[1]: Started cri-containerd-c5b94390b9d90d59787766c7ee09171fa43fb5651aa4fa8462e2916a37db6456.scope - libcontainer container c5b94390b9d90d59787766c7ee09171fa43fb5651aa4fa8462e2916a37db6456. 
Oct 9 07:50:36.620915 kubelet[2152]: W1009 07:50:36.620854 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.134.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.134.87:6443: connect: connection refused Oct 9 07:50:36.621371 kubelet[2152]: E1009 07:50:36.620923 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.134.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:50:36.649741 kubelet[2152]: E1009 07:50:36.646559 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.134.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-6-a1de16b848?timeout=10s\": dial tcp 64.23.134.87:6443: connect: connection refused" interval="1.6s" Oct 9 07:50:36.690002 containerd[1459]: time="2024-10-09T07:50:36.689517922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-6-a1de16b848,Uid:1fd730ea8ae4e3d790c8447cf1f888ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae73fd7d7e4b88db73bd539689fa2b5e3a7e3ccb5eda224b0969f14ac7c215df\"" Oct 9 07:50:36.698535 kubelet[2152]: E1009 07:50:36.697823 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:50:36.702944 containerd[1459]: time="2024-10-09T07:50:36.702859030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-6-a1de16b848,Uid:ffdc5b8e0b4e8064da8ebdee6c4315c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d058f9f78f7f3f8eeb3e3889cc64189dc7eeda1a8418a5184f442e5fa7faf5d9\"" Oct 9 07:50:36.703631 kubelet[2152]: E1009 
07:50:36.703599 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:50:36.706442 containerd[1459]: time="2024-10-09T07:50:36.706275241Z" level=info msg="CreateContainer within sandbox \"ae73fd7d7e4b88db73bd539689fa2b5e3a7e3ccb5eda224b0969f14ac7c215df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:50:36.706442 containerd[1459]: time="2024-10-09T07:50:36.706561987Z" level=info msg="CreateContainer within sandbox \"d058f9f78f7f3f8eeb3e3889cc64189dc7eeda1a8418a5184f442e5fa7faf5d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:50:36.725499 kubelet[2152]: W1009 07:50:36.725414 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.134.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-6-a1de16b848&limit=500&resourceVersion=0": dial tcp 64.23.134.87:6443: connect: connection refused Oct 9 07:50:36.725726 kubelet[2152]: E1009 07:50:36.725508 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.134.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-6-a1de16b848&limit=500&resourceVersion=0\": dial tcp 64.23.134.87:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:50:36.729391 containerd[1459]: time="2024-10-09T07:50:36.729067006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-6-a1de16b848,Uid:7b4a5f3076a93485bdd40ba7fdf9f0af,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5b94390b9d90d59787766c7ee09171fa43fb5651aa4fa8462e2916a37db6456\"" Oct 9 07:50:36.731103 kubelet[2152]: E1009 07:50:36.730790 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:50:36.733985 containerd[1459]: time="2024-10-09T07:50:36.733940314Z" level=info msg="CreateContainer within sandbox \"c5b94390b9d90d59787766c7ee09171fa43fb5651aa4fa8462e2916a37db6456\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:50:36.739864 containerd[1459]: time="2024-10-09T07:50:36.739649211Z" level=info msg="CreateContainer within sandbox \"d058f9f78f7f3f8eeb3e3889cc64189dc7eeda1a8418a5184f442e5fa7faf5d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4444392e32a2b0d5f5882902c1ae1b07f3a21643009542df0419699d8b18da4\"" Oct 9 07:50:36.741663 containerd[1459]: time="2024-10-09T07:50:36.741523156Z" level=info msg="StartContainer for \"b4444392e32a2b0d5f5882902c1ae1b07f3a21643009542df0419699d8b18da4\"" Oct 9 07:50:36.777158 containerd[1459]: time="2024-10-09T07:50:36.777085914Z" level=info msg="CreateContainer within sandbox \"ae73fd7d7e4b88db73bd539689fa2b5e3a7e3ccb5eda224b0969f14ac7c215df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4b0281c63393c25b999953067618cc498a7dc6573fb038857def8cb70f13e947\"" Oct 9 07:50:36.778994 containerd[1459]: time="2024-10-09T07:50:36.777996431Z" level=info msg="StartContainer for \"4b0281c63393c25b999953067618cc498a7dc6573fb038857def8cb70f13e947\"" Oct 9 07:50:36.782725 containerd[1459]: time="2024-10-09T07:50:36.782674069Z" level=info msg="CreateContainer within sandbox \"c5b94390b9d90d59787766c7ee09171fa43fb5651aa4fa8462e2916a37db6456\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c3487757fdb53f1a08777c1f8e31408c825f2040f58632c5c42ff188b975e0b\"" Oct 9 07:50:36.783483 containerd[1459]: time="2024-10-09T07:50:36.783445557Z" level=info msg="StartContainer for \"8c3487757fdb53f1a08777c1f8e31408c825f2040f58632c5c42ff188b975e0b\"" Oct 9 07:50:36.792325 systemd[1]: Started 
cri-containerd-b4444392e32a2b0d5f5882902c1ae1b07f3a21643009542df0419699d8b18da4.scope - libcontainer container b4444392e32a2b0d5f5882902c1ae1b07f3a21643009542df0419699d8b18da4. Oct 9 07:50:36.842918 kubelet[2152]: I1009 07:50:36.841710 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-6-a1de16b848" Oct 9 07:50:36.844344 kubelet[2152]: E1009 07:50:36.844223 2152 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.134.87:6443/api/v1/nodes\": dial tcp 64.23.134.87:6443: connect: connection refused" node="ci-4081.1.0-6-a1de16b848" Oct 9 07:50:36.847331 systemd[1]: Started cri-containerd-4b0281c63393c25b999953067618cc498a7dc6573fb038857def8cb70f13e947.scope - libcontainer container 4b0281c63393c25b999953067618cc498a7dc6573fb038857def8cb70f13e947. Oct 9 07:50:36.876357 systemd[1]: Started cri-containerd-8c3487757fdb53f1a08777c1f8e31408c825f2040f58632c5c42ff188b975e0b.scope - libcontainer container 8c3487757fdb53f1a08777c1f8e31408c825f2040f58632c5c42ff188b975e0b. 
Oct 9 07:50:36.885437 containerd[1459]: time="2024-10-09T07:50:36.885289428Z" level=info msg="StartContainer for \"b4444392e32a2b0d5f5882902c1ae1b07f3a21643009542df0419699d8b18da4\" returns successfully" Oct 9 07:50:36.948502 containerd[1459]: time="2024-10-09T07:50:36.948417020Z" level=info msg="StartContainer for \"4b0281c63393c25b999953067618cc498a7dc6573fb038857def8cb70f13e947\" returns successfully" Oct 9 07:50:36.990708 containerd[1459]: time="2024-10-09T07:50:36.990564749Z" level=info msg="StartContainer for \"8c3487757fdb53f1a08777c1f8e31408c825f2040f58632c5c42ff188b975e0b\" returns successfully" Oct 9 07:50:37.286604 kubelet[2152]: E1009 07:50:37.286423 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:50:37.291590 kubelet[2152]: E1009 07:50:37.291408 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:50:37.295970 kubelet[2152]: E1009 07:50:37.294605 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:50:38.298153 kubelet[2152]: E1009 07:50:38.297960 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:50:38.446707 kubelet[2152]: I1009 07:50:38.446332 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-6-a1de16b848" Oct 9 07:50:38.699647 kubelet[2152]: E1009 07:50:38.697590 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Oct 9 07:50:39.394155 kubelet[2152]: E1009 07:50:39.394105 2152 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.1.0-6-a1de16b848\" not found" node="ci-4081.1.0-6-a1de16b848" Oct 9 07:50:39.486061 kubelet[2152]: I1009 07:50:39.485818 2152 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.1.0-6-a1de16b848" Oct 9 07:50:39.486061 kubelet[2152]: E1009 07:50:39.485912 2152 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.1.0-6-a1de16b848\": node \"ci-4081.1.0-6-a1de16b848\" not found" Oct 9 07:50:40.219783 kubelet[2152]: I1009 07:50:40.219256 2152 apiserver.go:52] "Watching apiserver" Oct 9 07:50:40.238454 kubelet[2152]: I1009 07:50:40.238408 2152 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 9 07:50:41.865459 systemd[1]: Reloading requested from client PID 2431 ('systemctl') (unit session-9.scope)... Oct 9 07:50:41.865482 systemd[1]: Reloading... Oct 9 07:50:41.984015 zram_generator::config[2469]: No configuration found. Oct 9 07:50:42.240715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:50:42.387801 systemd[1]: Reloading finished in 521 ms. Oct 9 07:50:42.458619 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:50:42.481293 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:50:42.481673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:50:42.481780 systemd[1]: kubelet.service: Consumed 1.313s CPU time, 111.4M memory peak, 0B memory swap peak. Oct 9 07:50:42.496356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 07:50:42.666219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:50:42.677725 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:50:42.811069 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:50:42.811069 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:50:42.811069 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:50:42.815031 kubelet[2521]: I1009 07:50:42.814846 2521 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:50:42.833084 kubelet[2521]: I1009 07:50:42.832467 2521 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 07:50:42.833718 kubelet[2521]: I1009 07:50:42.833600 2521 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:50:42.835777 kubelet[2521]: I1009 07:50:42.834567 2521 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 07:50:42.839658 kubelet[2521]: I1009 07:50:42.839612 2521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 9 07:50:42.846183 kubelet[2521]: I1009 07:50:42.844800 2521 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:50:42.856583 kubelet[2521]: E1009 07:50:42.856542 2521 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 07:50:42.856583 kubelet[2521]: I1009 07:50:42.856578 2521 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 07:50:42.861014 kubelet[2521]: I1009 07:50:42.860977 2521 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:50:42.862025 kubelet[2521]: I1009 07:50:42.861992 2521 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 07:50:42.862266 kubelet[2521]: I1009 07:50:42.862208 2521 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:50:42.862917 kubelet[2521]: I1009 07:50:42.862275 2521 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.1.0-6-a1de16b848","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 9 07:50:42.862917 kubelet[2521]: I1009 07:50:42.862734 2521 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:50:42.862917 kubelet[2521]: I1009 07:50:42.862746 2521 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 07:50:42.862917 kubelet[2521]: I1009 07:50:42.862789 2521 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:50:42.863444 kubelet[2521]: I1009 07:50:42.863415 2521 kubelet.go:408] 
"Attempting to sync node with API server" Oct 9 07:50:42.863526 kubelet[2521]: I1009 07:50:42.863517 2521 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:50:42.865958 kubelet[2521]: I1009 07:50:42.863591 2521 kubelet.go:314] "Adding apiserver pod source" Oct 9 07:50:42.865958 kubelet[2521]: I1009 07:50:42.863632 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:50:42.873184 kubelet[2521]: I1009 07:50:42.872181 2521 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 9 07:50:42.875067 kubelet[2521]: I1009 07:50:42.875007 2521 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:50:42.878559 kubelet[2521]: I1009 07:50:42.878525 2521 server.go:1269] "Started kubelet" Oct 9 07:50:42.896135 kubelet[2521]: I1009 07:50:42.895991 2521 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:50:42.897940 kubelet[2521]: I1009 07:50:42.896990 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:50:42.906067 kubelet[2521]: I1009 07:50:42.905231 2521 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:50:42.906067 kubelet[2521]: I1009 07:50:42.902671 2521 server.go:460] "Adding debug handlers to kubelet server" Oct 9 07:50:42.906067 kubelet[2521]: I1009 07:50:42.905590 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:50:42.918900 kubelet[2521]: I1009 07:50:42.918812 2521 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 07:50:42.920976 kubelet[2521]: I1009 07:50:42.920738 2521 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 07:50:42.923651 kubelet[2521]: E1009 07:50:42.922746 2521 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.1.0-6-a1de16b848\" not found" Oct 9 07:50:42.923651 kubelet[2521]: I1009 07:50:42.923562 2521 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 07:50:42.923959 kubelet[2521]: I1009 07:50:42.923777 2521 reconciler.go:26] "Reconciler: start to sync state" Oct 9 07:50:42.927971 kubelet[2521]: E1009 07:50:42.927542 2521 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:50:42.929814 kubelet[2521]: I1009 07:50:42.929772 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:50:42.944082 kubelet[2521]: I1009 07:50:42.943298 2521 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:50:42.944082 kubelet[2521]: I1009 07:50:42.943343 2521 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:50:42.978912 kubelet[2521]: I1009 07:50:42.978833 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:50:42.985212 kubelet[2521]: I1009 07:50:42.981261 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:50:42.985212 kubelet[2521]: I1009 07:50:42.981302 2521 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:50:42.985212 kubelet[2521]: I1009 07:50:42.981326 2521 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 07:50:42.985212 kubelet[2521]: E1009 07:50:42.981394 2521 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:50:43.076184 kubelet[2521]: I1009 07:50:43.075293 2521 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:50:43.076184 kubelet[2521]: I1009 07:50:43.075465 2521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:50:43.076184 kubelet[2521]: I1009 07:50:43.075587 2521 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:50:43.077926 kubelet[2521]: I1009 07:50:43.076823 2521 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:50:43.077926 kubelet[2521]: I1009 07:50:43.076844 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:50:43.077926 kubelet[2521]: I1009 07:50:43.076890 2521 policy_none.go:49] "None policy: Start" Oct 9 07:50:43.080002 kubelet[2521]: I1009 07:50:43.079646 2521 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:50:43.080002 kubelet[2521]: I1009 07:50:43.079770 2521 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:50:43.081643 kubelet[2521]: I1009 07:50:43.081382 2521 state_mem.go:75] "Updated machine memory state" Oct 9 07:50:43.081643 kubelet[2521]: E1009 07:50:43.081549 2521 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:50:43.090900 kubelet[2521]: I1009 07:50:43.090649 2521 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:50:43.091493 kubelet[2521]: I1009 07:50:43.091386 2521 
eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 07:50:43.091493 kubelet[2521]: I1009 07:50:43.091414 2521 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 07:50:43.092552 kubelet[2521]: I1009 07:50:43.092535 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:50:43.212946 kubelet[2521]: I1009 07:50:43.211076 2521 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-6-a1de16b848" Oct 9 07:50:43.240935 kubelet[2521]: I1009 07:50:43.240678 2521 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.1.0-6-a1de16b848" Oct 9 07:50:43.240935 kubelet[2521]: I1009 07:50:43.240815 2521 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.1.0-6-a1de16b848" Oct 9 07:50:43.328222 kubelet[2521]: I1009 07:50:43.326905 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffdc5b8e0b4e8064da8ebdee6c4315c2-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-6-a1de16b848\" (UID: \"ffdc5b8e0b4e8064da8ebdee6c4315c2\") " pod="kube-system/kube-apiserver-ci-4081.1.0-6-a1de16b848" Oct 9 07:50:43.328222 kubelet[2521]: I1009 07:50:43.326958 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848" Oct 9 07:50:43.328222 kubelet[2521]: I1009 07:50:43.326996 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: 
\"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:43.328222 kubelet[2521]: I1009 07:50:43.327064 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:43.328222 kubelet[2521]: I1009 07:50:43.327094 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fd730ea8ae4e3d790c8447cf1f888ca-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-6-a1de16b848\" (UID: \"1fd730ea8ae4e3d790c8447cf1f888ca\") " pod="kube-system/kube-scheduler-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:43.328585 kubelet[2521]: I1009 07:50:43.327134 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffdc5b8e0b4e8064da8ebdee6c4315c2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-6-a1de16b848\" (UID: \"ffdc5b8e0b4e8064da8ebdee6c4315c2\") " pod="kube-system/kube-apiserver-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:43.328585 kubelet[2521]: I1009 07:50:43.327161 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:43.328585 kubelet[2521]: I1009 07:50:43.327209 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b4a5f3076a93485bdd40ba7fdf9f0af-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-6-a1de16b848\" (UID: \"7b4a5f3076a93485bdd40ba7fdf9f0af\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:43.328585 kubelet[2521]: I1009 07:50:43.327231 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffdc5b8e0b4e8064da8ebdee6c4315c2-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-6-a1de16b848\" (UID: \"ffdc5b8e0b4e8064da8ebdee6c4315c2\") " pod="kube-system/kube-apiserver-ci-4081.1.0-6-a1de16b848"
Oct 9 07:50:43.331901 kubelet[2521]: W1009 07:50:43.331415 2521 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:50:43.331901 kubelet[2521]: W1009 07:50:43.331704 2521 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:50:43.334948 kubelet[2521]: W1009 07:50:43.334153 2521 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:50:43.633008 kubelet[2521]: E1009 07:50:43.632496 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:43.634915 kubelet[2521]: E1009 07:50:43.634009 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:43.636087 kubelet[2521]: E1009 07:50:43.635963 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:43.865914 kubelet[2521]: I1009 07:50:43.865262 2521 apiserver.go:52] "Watching apiserver"
Oct 9 07:50:43.924968 kubelet[2521]: I1009 07:50:43.924708 2521 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 9 07:50:44.033025 kubelet[2521]: E1009 07:50:44.029345 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:44.033025 kubelet[2521]: E1009 07:50:44.029476 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:44.033025 kubelet[2521]: E1009 07:50:44.030288 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:44.109803 kubelet[2521]: I1009 07:50:44.109084 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.1.0-6-a1de16b848" podStartSLOduration=1.109057671 podStartE2EDuration="1.109057671s" podCreationTimestamp="2024-10-09 07:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:50:44.062152631 +0000 UTC m=+1.357395689" watchObservedRunningTime="2024-10-09 07:50:44.109057671 +0000 UTC m=+1.404300730"
Oct 9 07:50:44.213154 kubelet[2521]: I1009 07:50:44.212910 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.1.0-6-a1de16b848" podStartSLOduration=1.21288773 podStartE2EDuration="1.21288773s" podCreationTimestamp="2024-10-09 07:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:50:44.113278288 +0000 UTC m=+1.408521347" watchObservedRunningTime="2024-10-09 07:50:44.21288773 +0000 UTC m=+1.508130780"
Oct 9 07:50:44.287082 kubelet[2521]: I1009 07:50:44.286983 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.1.0-6-a1de16b848" podStartSLOduration=1.286952375 podStartE2EDuration="1.286952375s" podCreationTimestamp="2024-10-09 07:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:50:44.214484871 +0000 UTC m=+1.509727941" watchObservedRunningTime="2024-10-09 07:50:44.286952375 +0000 UTC m=+1.582195433"
Oct 9 07:50:45.031058 kubelet[2521]: E1009 07:50:45.030963 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:45.032598 kubelet[2521]: E1009 07:50:45.032408 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:45.343767 kubelet[2521]: E1009 07:50:45.341965 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:45.953932 update_engine[1448]: I20241009 07:50:45.952939 1448 update_attempter.cc:509] Updating boot flags...
Oct 9 07:50:46.036232 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2578)
Oct 9 07:50:46.041247 kubelet[2521]: E1009 07:50:46.039781 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:46.149189 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2581)
Oct 9 07:50:47.648530 kubelet[2521]: I1009 07:50:47.648493 2521 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 07:50:47.650688 kubelet[2521]: I1009 07:50:47.649640 2521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 07:50:47.650748 containerd[1459]: time="2024-10-09T07:50:47.648989416Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 07:50:48.682301 systemd[1]: Created slice kubepods-besteffort-podb0f9552b_212e_4034_bdb6_6eb7f21d4421.slice - libcontainer container kubepods-besteffort-podb0f9552b_212e_4034_bdb6_6eb7f21d4421.slice.
Oct 9 07:50:48.765378 kubelet[2521]: I1009 07:50:48.765182 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0f9552b-212e-4034-bdb6-6eb7f21d4421-xtables-lock\") pod \"kube-proxy-5dwbd\" (UID: \"b0f9552b-212e-4034-bdb6-6eb7f21d4421\") " pod="kube-system/kube-proxy-5dwbd"
Oct 9 07:50:48.765378 kubelet[2521]: I1009 07:50:48.765241 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khfh\" (UniqueName: \"kubernetes.io/projected/b0f9552b-212e-4034-bdb6-6eb7f21d4421-kube-api-access-6khfh\") pod \"kube-proxy-5dwbd\" (UID: \"b0f9552b-212e-4034-bdb6-6eb7f21d4421\") " pod="kube-system/kube-proxy-5dwbd"
Oct 9 07:50:48.765378 kubelet[2521]: I1009 07:50:48.765301 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0f9552b-212e-4034-bdb6-6eb7f21d4421-kube-proxy\") pod \"kube-proxy-5dwbd\" (UID: \"b0f9552b-212e-4034-bdb6-6eb7f21d4421\") " pod="kube-system/kube-proxy-5dwbd"
Oct 9 07:50:48.765378 kubelet[2521]: I1009 07:50:48.765341 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f9552b-212e-4034-bdb6-6eb7f21d4421-lib-modules\") pod \"kube-proxy-5dwbd\" (UID: \"b0f9552b-212e-4034-bdb6-6eb7f21d4421\") " pod="kube-system/kube-proxy-5dwbd"
Oct 9 07:50:48.861656 systemd[1]: Created slice kubepods-besteffort-pod54a74a82_c22f_4086_ba4f_a7c81f55b17f.slice - libcontainer container kubepods-besteffort-pod54a74a82_c22f_4086_ba4f_a7c81f55b17f.slice.
Oct 9 07:50:48.945806 sudo[1669]: pam_unix(sudo:session): session closed for user root
Oct 9 07:50:48.952718 sshd[1666]: pam_unix(sshd:session): session closed for user core
Oct 9 07:50:48.958085 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit.
Oct 9 07:50:48.958533 systemd[1]: sshd@8-64.23.134.87:22-139.178.89.65:44410.service: Deactivated successfully.
Oct 9 07:50:48.962434 systemd[1]: session-9.scope: Deactivated successfully.
Oct 9 07:50:48.963190 systemd[1]: session-9.scope: Consumed 6.550s CPU time, 151.8M memory peak, 0B memory swap peak.
Oct 9 07:50:48.967089 kubelet[2521]: I1009 07:50:48.966726 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54a74a82-c22f-4086-ba4f-a7c81f55b17f-var-lib-calico\") pod \"tigera-operator-55748b469f-fs7c8\" (UID: \"54a74a82-c22f-4086-ba4f-a7c81f55b17f\") " pod="tigera-operator/tigera-operator-55748b469f-fs7c8"
Oct 9 07:50:48.967089 kubelet[2521]: I1009 07:50:48.966796 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfpq\" (UniqueName: \"kubernetes.io/projected/54a74a82-c22f-4086-ba4f-a7c81f55b17f-kube-api-access-4cfpq\") pod \"tigera-operator-55748b469f-fs7c8\" (UID: \"54a74a82-c22f-4086-ba4f-a7c81f55b17f\") " pod="tigera-operator/tigera-operator-55748b469f-fs7c8"
Oct 9 07:50:48.967871 systemd-logind[1447]: Removed session 9.
Oct 9 07:50:49.000181 kubelet[2521]: E1009 07:50:48.999981 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:49.003771 containerd[1459]: time="2024-10-09T07:50:49.002964016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dwbd,Uid:b0f9552b-212e-4034-bdb6-6eb7f21d4421,Namespace:kube-system,Attempt:0,}"
Oct 9 07:50:49.047706 containerd[1459]: time="2024-10-09T07:50:49.046838402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:50:49.048255 containerd[1459]: time="2024-10-09T07:50:49.047072224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:50:49.048422 containerd[1459]: time="2024-10-09T07:50:49.048285472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:50:49.048572 containerd[1459]: time="2024-10-09T07:50:49.048470770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:50:49.102261 systemd[1]: Started cri-containerd-63f68c50f91d56672d2d1dae1db6efe07821d42fd1ff0caa7827b31815abe7eb.scope - libcontainer container 63f68c50f91d56672d2d1dae1db6efe07821d42fd1ff0caa7827b31815abe7eb.
Oct 9 07:50:49.141173 containerd[1459]: time="2024-10-09T07:50:49.140607690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dwbd,Uid:b0f9552b-212e-4034-bdb6-6eb7f21d4421,Namespace:kube-system,Attempt:0,} returns sandbox id \"63f68c50f91d56672d2d1dae1db6efe07821d42fd1ff0caa7827b31815abe7eb\""
Oct 9 07:50:49.141831 kubelet[2521]: E1009 07:50:49.141804 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:49.145336 containerd[1459]: time="2024-10-09T07:50:49.145295090Z" level=info msg="CreateContainer within sandbox \"63f68c50f91d56672d2d1dae1db6efe07821d42fd1ff0caa7827b31815abe7eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 9 07:50:49.175270 containerd[1459]: time="2024-10-09T07:50:49.174925705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-fs7c8,Uid:54a74a82-c22f-4086-ba4f-a7c81f55b17f,Namespace:tigera-operator,Attempt:0,}"
Oct 9 07:50:49.185589 containerd[1459]: time="2024-10-09T07:50:49.185523424Z" level=info msg="CreateContainer within sandbox \"63f68c50f91d56672d2d1dae1db6efe07821d42fd1ff0caa7827b31815abe7eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20c9c84bae2f674cf84a3643f3740180ee1b721242240387f57c629efdcf4c79\""
Oct 9 07:50:49.188225 containerd[1459]: time="2024-10-09T07:50:49.188166308Z" level=info msg="StartContainer for \"20c9c84bae2f674cf84a3643f3740180ee1b721242240387f57c629efdcf4c79\""
Oct 9 07:50:49.237718 systemd[1]: Started cri-containerd-20c9c84bae2f674cf84a3643f3740180ee1b721242240387f57c629efdcf4c79.scope - libcontainer container 20c9c84bae2f674cf84a3643f3740180ee1b721242240387f57c629efdcf4c79.
Oct 9 07:50:49.244911 containerd[1459]: time="2024-10-09T07:50:49.236923958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:50:49.244911 containerd[1459]: time="2024-10-09T07:50:49.237007836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:50:49.244911 containerd[1459]: time="2024-10-09T07:50:49.237024298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:50:49.244911 containerd[1459]: time="2024-10-09T07:50:49.237125180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:50:49.294280 systemd[1]: Started cri-containerd-b2d012f6fd333fdb34007583afba69c89eb01c7d38ceb67afa227d59ce95b502.scope - libcontainer container b2d012f6fd333fdb34007583afba69c89eb01c7d38ceb67afa227d59ce95b502.
Oct 9 07:50:49.342976 containerd[1459]: time="2024-10-09T07:50:49.342910342Z" level=info msg="StartContainer for \"20c9c84bae2f674cf84a3643f3740180ee1b721242240387f57c629efdcf4c79\" returns successfully"
Oct 9 07:50:49.374358 containerd[1459]: time="2024-10-09T07:50:49.374310628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-fs7c8,Uid:54a74a82-c22f-4086-ba4f-a7c81f55b17f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b2d012f6fd333fdb34007583afba69c89eb01c7d38ceb67afa227d59ce95b502\""
Oct 9 07:50:49.380405 containerd[1459]: time="2024-10-09T07:50:49.380230320Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 9 07:50:50.053073 kubelet[2521]: E1009 07:50:50.052017 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:51.018081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771456281.mount: Deactivated successfully.
Oct 9 07:50:52.384851 containerd[1459]: time="2024-10-09T07:50:52.383868946Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:52.384851 containerd[1459]: time="2024-10-09T07:50:52.384798097Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136545"
Oct 9 07:50:52.385746 containerd[1459]: time="2024-10-09T07:50:52.385708169Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:52.388677 containerd[1459]: time="2024-10-09T07:50:52.388615130Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:52.389556 containerd[1459]: time="2024-10-09T07:50:52.389511589Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 3.009101944s"
Oct 9 07:50:52.389769 containerd[1459]: time="2024-10-09T07:50:52.389752749Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Oct 9 07:50:52.403154 containerd[1459]: time="2024-10-09T07:50:52.403087877Z" level=info msg="CreateContainer within sandbox \"b2d012f6fd333fdb34007583afba69c89eb01c7d38ceb67afa227d59ce95b502\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 9 07:50:52.460091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303288999.mount: Deactivated successfully.
Oct 9 07:50:52.462693 containerd[1459]: time="2024-10-09T07:50:52.462472410Z" level=info msg="CreateContainer within sandbox \"b2d012f6fd333fdb34007583afba69c89eb01c7d38ceb67afa227d59ce95b502\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"90f8d2e786424ae60f2e18867112cdfa0795a669f6f035ad52639caecd7366b2\""
Oct 9 07:50:52.464057 containerd[1459]: time="2024-10-09T07:50:52.463725356Z" level=info msg="StartContainer for \"90f8d2e786424ae60f2e18867112cdfa0795a669f6f035ad52639caecd7366b2\""
Oct 9 07:50:52.515121 systemd[1]: Started cri-containerd-90f8d2e786424ae60f2e18867112cdfa0795a669f6f035ad52639caecd7366b2.scope - libcontainer container 90f8d2e786424ae60f2e18867112cdfa0795a669f6f035ad52639caecd7366b2.
Oct 9 07:50:52.570903 containerd[1459]: time="2024-10-09T07:50:52.570718561Z" level=info msg="StartContainer for \"90f8d2e786424ae60f2e18867112cdfa0795a669f6f035ad52639caecd7366b2\" returns successfully"
Oct 9 07:50:53.093304 kubelet[2521]: I1009 07:50:53.092330 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-fs7c8" podStartSLOduration=2.06859436 podStartE2EDuration="5.092299807s" podCreationTimestamp="2024-10-09 07:50:48 +0000 UTC" firstStartedPulling="2024-10-09 07:50:49.37633499 +0000 UTC m=+6.671578026" lastFinishedPulling="2024-10-09 07:50:52.400040428 +0000 UTC m=+9.695283473" observedRunningTime="2024-10-09 07:50:53.092284181 +0000 UTC m=+10.387527245" watchObservedRunningTime="2024-10-09 07:50:53.092299807 +0000 UTC m=+10.387542869"
Oct 9 07:50:53.093304 kubelet[2521]: I1009 07:50:53.092688 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5dwbd" podStartSLOduration=5.092671536 podStartE2EDuration="5.092671536s" podCreationTimestamp="2024-10-09 07:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:50:50.065817087 +0000 UTC m=+7.361060152" watchObservedRunningTime="2024-10-09 07:50:53.092671536 +0000 UTC m=+10.387914600"
Oct 9 07:50:54.087954 kubelet[2521]: E1009 07:50:54.087009 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:54.899244 kubelet[2521]: E1009 07:50:54.899195 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:55.086035 kubelet[2521]: E1009 07:50:55.083921 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:55.352806 kubelet[2521]: E1009 07:50:55.352737 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:55.886099 systemd[1]: Created slice kubepods-besteffort-pod416e5950_a96a_419b_ab60_d37348c80f88.slice - libcontainer container kubepods-besteffort-pod416e5950_a96a_419b_ab60_d37348c80f88.slice.
Oct 9 07:50:56.019203 kubelet[2521]: I1009 07:50:56.018143 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/416e5950-a96a-419b-ab60-d37348c80f88-tigera-ca-bundle\") pod \"calico-typha-597b685764-jqcvl\" (UID: \"416e5950-a96a-419b-ab60-d37348c80f88\") " pod="calico-system/calico-typha-597b685764-jqcvl"
Oct 9 07:50:56.019203 kubelet[2521]: I1009 07:50:56.018395 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/416e5950-a96a-419b-ab60-d37348c80f88-typha-certs\") pod \"calico-typha-597b685764-jqcvl\" (UID: \"416e5950-a96a-419b-ab60-d37348c80f88\") " pod="calico-system/calico-typha-597b685764-jqcvl"
Oct 9 07:50:56.019203 kubelet[2521]: I1009 07:50:56.018473 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctbj5\" (UniqueName: \"kubernetes.io/projected/416e5950-a96a-419b-ab60-d37348c80f88-kube-api-access-ctbj5\") pod \"calico-typha-597b685764-jqcvl\" (UID: \"416e5950-a96a-419b-ab60-d37348c80f88\") " pod="calico-system/calico-typha-597b685764-jqcvl"
Oct 9 07:50:56.034506 systemd[1]: Created slice kubepods-besteffort-poddf01fd9e_1a43_4a4b_a451_a39b302bd44a.slice - libcontainer container kubepods-besteffort-poddf01fd9e_1a43_4a4b_a451_a39b302bd44a.slice.
Oct 9 07:50:56.119568 kubelet[2521]: I1009 07:50:56.119203 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-lib-modules\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.119957 kubelet[2521]: I1009 07:50:56.119711 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-var-lib-calico\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.121825 kubelet[2521]: I1009 07:50:56.121733 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-flexvol-driver-host\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.121825 kubelet[2521]: I1009 07:50:56.121797 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-cni-bin-dir\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.122072 kubelet[2521]: I1009 07:50:56.121842 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df01fd9e-1a43-4a4b-a451-a39b302bd44a-tigera-ca-bundle\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.122072 kubelet[2521]: I1009 07:50:56.121977 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-cni-net-dir\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.122072 kubelet[2521]: I1009 07:50:56.122005 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzct\" (UniqueName: \"kubernetes.io/projected/df01fd9e-1a43-4a4b-a451-a39b302bd44a-kube-api-access-jtzct\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.122072 kubelet[2521]: I1009 07:50:56.122037 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-var-run-calico\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.122072 kubelet[2521]: I1009 07:50:56.122062 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-cni-log-dir\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.122324 kubelet[2521]: I1009 07:50:56.122086 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-xtables-lock\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.122324 kubelet[2521]: I1009 07:50:56.122120 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df01fd9e-1a43-4a4b-a451-a39b302bd44a-node-certs\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.122324 kubelet[2521]: I1009 07:50:56.122235 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df01fd9e-1a43-4a4b-a451-a39b302bd44a-policysync\") pod \"calico-node-s7m66\" (UID: \"df01fd9e-1a43-4a4b-a451-a39b302bd44a\") " pod="calico-system/calico-node-s7m66"
Oct 9 07:50:56.194747 kubelet[2521]: E1009 07:50:56.194504 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d"
Oct 9 07:50:56.194967 kubelet[2521]: E1009 07:50:56.194804 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:56.196877 containerd[1459]: time="2024-10-09T07:50:56.196818492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-597b685764-jqcvl,Uid:416e5950-a96a-419b-ab60-d37348c80f88,Namespace:calico-system,Attempt:0,}"
Oct 9 07:50:56.237636 kubelet[2521]: E1009 07:50:56.237293 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.237636 kubelet[2521]: W1009 07:50:56.237337 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.237636 kubelet[2521]: E1009 07:50:56.237381 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.240358 kubelet[2521]: E1009 07:50:56.238414 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.240358 kubelet[2521]: W1009 07:50:56.238443 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.240358 kubelet[2521]: E1009 07:50:56.238470 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.240358 kubelet[2521]: E1009 07:50:56.240018 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.240358 kubelet[2521]: W1009 07:50:56.240042 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.240358 kubelet[2521]: E1009 07:50:56.240071 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.241390 kubelet[2521]: E1009 07:50:56.240483 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.241390 kubelet[2521]: W1009 07:50:56.240500 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.241390 kubelet[2521]: E1009 07:50:56.240516 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.264430 kubelet[2521]: E1009 07:50:56.260034 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.264430 kubelet[2521]: W1009 07:50:56.260066 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.264430 kubelet[2521]: E1009 07:50:56.260105 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.278716 kubelet[2521]: E1009 07:50:56.275015 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.278716 kubelet[2521]: W1009 07:50:56.275061 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.278716 kubelet[2521]: E1009 07:50:56.275101 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.329125 kubelet[2521]: E1009 07:50:56.329066 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.329462 kubelet[2521]: W1009 07:50:56.329210 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.329462 kubelet[2521]: E1009 07:50:56.329257 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.329462 kubelet[2521]: I1009 07:50:56.329305 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8243e6c5-fffe-40ae-9ffc-3e5c0557a44d-varrun\") pod \"csi-node-driver-lxz42\" (UID: \"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d\") " pod="calico-system/csi-node-driver-lxz42"
Oct 9 07:50:56.334576 kubelet[2521]: E1009 07:50:56.330989 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.334576 kubelet[2521]: W1009 07:50:56.331036 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.334576 kubelet[2521]: E1009 07:50:56.331074 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.334576 kubelet[2521]: I1009 07:50:56.331136 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hcb\" (UniqueName: \"kubernetes.io/projected/8243e6c5-fffe-40ae-9ffc-3e5c0557a44d-kube-api-access-42hcb\") pod \"csi-node-driver-lxz42\" (UID: \"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d\") " pod="calico-system/csi-node-driver-lxz42"
Oct 9 07:50:56.334576 kubelet[2521]: E1009 07:50:56.334291 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.334576 kubelet[2521]: W1009 07:50:56.334335 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.334576 kubelet[2521]: E1009 07:50:56.334373 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.334576 kubelet[2521]: I1009 07:50:56.334419 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8243e6c5-fffe-40ae-9ffc-3e5c0557a44d-registration-dir\") pod \"csi-node-driver-lxz42\" (UID: \"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d\") " pod="calico-system/csi-node-driver-lxz42"
Oct 9 07:50:56.338103 kubelet[2521]: E1009 07:50:56.336401 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.338103 kubelet[2521]: W1009 07:50:56.336445 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.338103 kubelet[2521]: E1009 07:50:56.336481 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:50:56.338103 kubelet[2521]: I1009 07:50:56.336527 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8243e6c5-fffe-40ae-9ffc-3e5c0557a44d-kubelet-dir\") pod \"csi-node-driver-lxz42\" (UID: \"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d\") " pod="calico-system/csi-node-driver-lxz42"
Oct 9 07:50:56.340115 kubelet[2521]: E1009 07:50:56.339313 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:56.340399 kubelet[2521]: E1009 07:50:56.340253 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:50:56.340399 kubelet[2521]: W1009 07:50:56.340288 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:50:56.340399 kubelet[2521]: E1009 07:50:56.340328 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.340399 kubelet[2521]: I1009 07:50:56.340368 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8243e6c5-fffe-40ae-9ffc-3e5c0557a44d-socket-dir\") pod \"csi-node-driver-lxz42\" (UID: \"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d\") " pod="calico-system/csi-node-driver-lxz42" Oct 9 07:50:56.344248 kubelet[2521]: E1009 07:50:56.343053 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.344248 kubelet[2521]: W1009 07:50:56.343093 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.344248 kubelet[2521]: E1009 07:50:56.343124 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.346344 kubelet[2521]: E1009 07:50:56.345643 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.346344 kubelet[2521]: W1009 07:50:56.345668 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.346344 kubelet[2521]: E1009 07:50:56.345950 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.348925 kubelet[2521]: E1009 07:50:56.348340 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.348925 kubelet[2521]: W1009 07:50:56.348372 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.348925 kubelet[2521]: E1009 07:50:56.348634 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.349205 containerd[1459]: time="2024-10-09T07:50:56.348499178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s7m66,Uid:df01fd9e-1a43-4a4b-a451-a39b302bd44a,Namespace:calico-system,Attempt:0,}" Oct 9 07:50:56.350281 kubelet[2521]: E1009 07:50:56.350181 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.350281 kubelet[2521]: W1009 07:50:56.350208 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.350480 kubelet[2521]: E1009 07:50:56.350397 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.352936 kubelet[2521]: E1009 07:50:56.351261 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.352936 kubelet[2521]: W1009 07:50:56.351292 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.352936 kubelet[2521]: E1009 07:50:56.351500 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.353932 kubelet[2521]: E1009 07:50:56.353559 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.353932 kubelet[2521]: W1009 07:50:56.353593 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.353932 kubelet[2521]: E1009 07:50:56.353781 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.355760 kubelet[2521]: E1009 07:50:56.354787 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.355760 kubelet[2521]: W1009 07:50:56.354815 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.355760 kubelet[2521]: E1009 07:50:56.354841 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.356988 kubelet[2521]: E1009 07:50:56.356795 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.356988 kubelet[2521]: W1009 07:50:56.356825 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.356988 kubelet[2521]: E1009 07:50:56.356852 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.361478 kubelet[2521]: E1009 07:50:56.359825 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.361478 kubelet[2521]: W1009 07:50:56.359864 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.361478 kubelet[2521]: E1009 07:50:56.359915 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.362292 kubelet[2521]: E1009 07:50:56.362212 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.362292 kubelet[2521]: W1009 07:50:56.362244 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.362292 kubelet[2521]: E1009 07:50:56.362272 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.368083 containerd[1459]: time="2024-10-09T07:50:56.364674486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:50:56.368083 containerd[1459]: time="2024-10-09T07:50:56.367967850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:50:56.369226 containerd[1459]: time="2024-10-09T07:50:56.368506660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:50:56.370815 containerd[1459]: time="2024-10-09T07:50:56.369917293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:50:56.424355 systemd[1]: Started cri-containerd-ec364840b8252f3b3fcce5c1bf50f2ab4e80d2a5221c3c3504f8cff2074cd301.scope - libcontainer container ec364840b8252f3b3fcce5c1bf50f2ab4e80d2a5221c3c3504f8cff2074cd301. Oct 9 07:50:56.445045 kubelet[2521]: E1009 07:50:56.444399 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.445045 kubelet[2521]: W1009 07:50:56.444523 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.447828 kubelet[2521]: E1009 07:50:56.445661 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.448068 kubelet[2521]: E1009 07:50:56.447921 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.448068 kubelet[2521]: W1009 07:50:56.447973 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.448068 kubelet[2521]: E1009 07:50:56.448009 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.455029 kubelet[2521]: E1009 07:50:56.449071 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.455029 kubelet[2521]: W1009 07:50:56.449105 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.455029 kubelet[2521]: E1009 07:50:56.449138 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.455029 kubelet[2521]: E1009 07:50:56.451246 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.455029 kubelet[2521]: W1009 07:50:56.451269 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.455029 kubelet[2521]: E1009 07:50:56.451298 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.455029 kubelet[2521]: E1009 07:50:56.451695 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.455029 kubelet[2521]: W1009 07:50:56.451712 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.455029 kubelet[2521]: E1009 07:50:56.451738 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.464158 kubelet[2521]: E1009 07:50:56.463753 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.466373 kubelet[2521]: W1009 07:50:56.465847 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.466373 kubelet[2521]: E1009 07:50:56.465924 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.472702 kubelet[2521]: E1009 07:50:56.472008 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.472702 kubelet[2521]: W1009 07:50:56.472055 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.472702 kubelet[2521]: E1009 07:50:56.472094 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.477159 kubelet[2521]: E1009 07:50:56.476025 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.477159 kubelet[2521]: W1009 07:50:56.476064 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.477159 kubelet[2521]: E1009 07:50:56.476100 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.477159 kubelet[2521]: E1009 07:50:56.476842 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.477159 kubelet[2521]: W1009 07:50:56.476872 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.477159 kubelet[2521]: E1009 07:50:56.476964 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.483736 kubelet[2521]: E1009 07:50:56.482365 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.483736 kubelet[2521]: W1009 07:50:56.482984 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.483736 kubelet[2521]: E1009 07:50:56.483068 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.489965 kubelet[2521]: E1009 07:50:56.487445 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.489965 kubelet[2521]: W1009 07:50:56.487485 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.490402 kubelet[2521]: E1009 07:50:56.490360 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.490550 kubelet[2521]: W1009 07:50:56.490529 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.492628 kubelet[2521]: E1009 07:50:56.492566 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.492628 kubelet[2521]: E1009 07:50:56.492629 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.494046 kubelet[2521]: E1009 07:50:56.492595 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.495011 kubelet[2521]: W1009 07:50:56.494500 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.497160 kubelet[2521]: E1009 07:50:56.497111 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.497445 kubelet[2521]: W1009 07:50:56.497350 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.502210 kubelet[2521]: E1009 07:50:56.499940 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.502210 kubelet[2521]: W1009 07:50:56.499976 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.502451 kubelet[2521]: E1009 07:50:56.502331 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.502451 kubelet[2521]: W1009 07:50:56.502362 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.502551 kubelet[2521]: E1009 07:50:56.502457 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.505829 kubelet[2521]: E1009 07:50:56.505180 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.505829 kubelet[2521]: E1009 07:50:56.497285 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.505829 kubelet[2521]: E1009 07:50:56.505246 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.509146 kubelet[2521]: E1009 07:50:56.507105 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.509146 kubelet[2521]: W1009 07:50:56.507139 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.509146 kubelet[2521]: E1009 07:50:56.507184 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.510124 kubelet[2521]: E1009 07:50:56.509716 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.510124 kubelet[2521]: W1009 07:50:56.509743 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.510124 kubelet[2521]: E1009 07:50:56.509775 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.510124 kubelet[2521]: E1009 07:50:56.510125 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.510124 kubelet[2521]: W1009 07:50:56.510140 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.511162 kubelet[2521]: E1009 07:50:56.510168 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.512769 kubelet[2521]: E1009 07:50:56.511797 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.512769 kubelet[2521]: W1009 07:50:56.511821 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.512769 kubelet[2521]: E1009 07:50:56.511846 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.512769 kubelet[2521]: E1009 07:50:56.512604 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.512769 kubelet[2521]: W1009 07:50:56.512622 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.512769 kubelet[2521]: E1009 07:50:56.512639 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.516185 kubelet[2521]: E1009 07:50:56.515193 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.516185 kubelet[2521]: W1009 07:50:56.515228 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.518633 kubelet[2521]: E1009 07:50:56.517334 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.518633 kubelet[2521]: E1009 07:50:56.518211 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.518633 kubelet[2521]: W1009 07:50:56.518363 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.518633 kubelet[2521]: E1009 07:50:56.518396 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:50:56.520935 kubelet[2521]: E1009 07:50:56.520483 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.520935 kubelet[2521]: W1009 07:50:56.520521 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.520935 kubelet[2521]: E1009 07:50:56.520554 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.522394 kubelet[2521]: E1009 07:50:56.521042 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.522394 kubelet[2521]: W1009 07:50:56.521057 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.522394 kubelet[2521]: E1009 07:50:56.521076 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.535656 containerd[1459]: time="2024-10-09T07:50:56.534778349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:50:56.535656 containerd[1459]: time="2024-10-09T07:50:56.534872683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:50:56.535656 containerd[1459]: time="2024-10-09T07:50:56.534939786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:50:56.535656 containerd[1459]: time="2024-10-09T07:50:56.535128503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:50:56.547347 kubelet[2521]: E1009 07:50:56.547290 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:50:56.547347 kubelet[2521]: W1009 07:50:56.547335 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:50:56.547640 kubelet[2521]: E1009 07:50:56.547371 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:50:56.584235 systemd[1]: Started cri-containerd-6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa.scope - libcontainer container 6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa. 
Oct 9 07:50:56.702971 containerd[1459]: time="2024-10-09T07:50:56.701820744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s7m66,Uid:df01fd9e-1a43-4a4b-a451-a39b302bd44a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa\""
Oct 9 07:50:56.710247 kubelet[2521]: E1009 07:50:56.710200 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:56.714552 containerd[1459]: time="2024-10-09T07:50:56.713952568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Oct 9 07:50:56.797781 containerd[1459]: time="2024-10-09T07:50:56.797633456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-597b685764-jqcvl,Uid:416e5950-a96a-419b-ab60-d37348c80f88,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec364840b8252f3b3fcce5c1bf50f2ab4e80d2a5221c3c3504f8cff2074cd301\""
Oct 9 07:50:56.798978 kubelet[2521]: E1009 07:50:56.798916 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:57.991689 kubelet[2521]: E1009 07:50:57.990740 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d"
Oct 9 07:50:58.084020 containerd[1459]: time="2024-10-09T07:50:58.083965797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:58.086656 containerd[1459]: time="2024-10-09T07:50:58.086593465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007"
Oct 9 07:50:58.087949 containerd[1459]: time="2024-10-09T07:50:58.087827551Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:58.092821 containerd[1459]: time="2024-10-09T07:50:58.092103012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:50:58.093642 containerd[1459]: time="2024-10-09T07:50:58.093488615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.378948777s"
Oct 9 07:50:58.093642 containerd[1459]: time="2024-10-09T07:50:58.093549625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\""
Oct 9 07:50:58.097504 containerd[1459]: time="2024-10-09T07:50:58.096821841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 9 07:50:58.097713 containerd[1459]: time="2024-10-09T07:50:58.097655165Z" level=info msg="CreateContainer within sandbox \"6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 9 07:50:58.132767 containerd[1459]: time="2024-10-09T07:50:58.131838540Z" level=info msg="CreateContainer within sandbox \"6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b\""
Oct 9 07:50:58.135725 containerd[1459]: time="2024-10-09T07:50:58.135655364Z" level=info msg="StartContainer for \"e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b\""
Oct 9 07:50:58.217535 systemd[1]: Started cri-containerd-e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b.scope - libcontainer container e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b.
Oct 9 07:50:58.302309 containerd[1459]: time="2024-10-09T07:50:58.302136026Z" level=info msg="StartContainer for \"e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b\" returns successfully"
Oct 9 07:50:58.333293 systemd[1]: cri-containerd-e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b.scope: Deactivated successfully.
Oct 9 07:50:58.381199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b-rootfs.mount: Deactivated successfully.
Oct 9 07:50:58.388508 containerd[1459]: time="2024-10-09T07:50:58.388342620Z" level=info msg="shim disconnected" id=e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b namespace=k8s.io
Oct 9 07:50:58.388508 containerd[1459]: time="2024-10-09T07:50:58.388420531Z" level=warning msg="cleaning up after shim disconnected" id=e749df78482ff35aa926e5d013e430b5058247817dd797ad14d327a61e55553b namespace=k8s.io
Oct 9 07:50:58.388508 containerd[1459]: time="2024-10-09T07:50:58.388433600Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:50:59.138818 kubelet[2521]: E1009 07:50:59.138422 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:50:59.982321 kubelet[2521]: E1009 07:50:59.982096 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d"
Oct 9 07:51:01.418595 containerd[1459]: time="2024-10-09T07:51:01.398784387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:01.420873 containerd[1459]: time="2024-10-09T07:51:01.420674492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Oct 9 07:51:01.425425 containerd[1459]: time="2024-10-09T07:51:01.423508375Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:01.432477 containerd[1459]: time="2024-10-09T07:51:01.432398718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:01.434136 containerd[1459]: time="2024-10-09T07:51:01.434073556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.337198771s"
Oct 9 07:51:01.434387 containerd[1459]: time="2024-10-09T07:51:01.434354708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Oct 9 07:51:01.441062 containerd[1459]: time="2024-10-09T07:51:01.440986897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Oct 9 07:51:01.500735 containerd[1459]: time="2024-10-09T07:51:01.500666701Z" level=info msg="CreateContainer within sandbox \"ec364840b8252f3b3fcce5c1bf50f2ab4e80d2a5221c3c3504f8cff2074cd301\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 9 07:51:01.591273 containerd[1459]: time="2024-10-09T07:51:01.590777808Z" level=info msg="CreateContainer within sandbox \"ec364840b8252f3b3fcce5c1bf50f2ab4e80d2a5221c3c3504f8cff2074cd301\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6961e680c14d05868b25ffacce43132ee720b8ba0998f24c8855f22cf80e6ad2\""
Oct 9 07:51:01.596033 containerd[1459]: time="2024-10-09T07:51:01.593379067Z" level=info msg="StartContainer for \"6961e680c14d05868b25ffacce43132ee720b8ba0998f24c8855f22cf80e6ad2\""
Oct 9 07:51:01.691247 systemd[1]: Started cri-containerd-6961e680c14d05868b25ffacce43132ee720b8ba0998f24c8855f22cf80e6ad2.scope - libcontainer container 6961e680c14d05868b25ffacce43132ee720b8ba0998f24c8855f22cf80e6ad2.
Oct 9 07:51:01.820742 containerd[1459]: time="2024-10-09T07:51:01.820676698Z" level=info msg="StartContainer for \"6961e680c14d05868b25ffacce43132ee720b8ba0998f24c8855f22cf80e6ad2\" returns successfully"
Oct 9 07:51:01.982636 kubelet[2521]: E1009 07:51:01.982449 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d"
Oct 9 07:51:02.170757 kubelet[2521]: E1009 07:51:02.170219 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:02.229190 kubelet[2521]: I1009 07:51:02.228828 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-597b685764-jqcvl" podStartSLOduration=2.592004365 podStartE2EDuration="7.228797216s" podCreationTimestamp="2024-10-09 07:50:55 +0000 UTC" firstStartedPulling="2024-10-09 07:50:56.799544715 +0000 UTC m=+14.094787752" lastFinishedPulling="2024-10-09 07:51:01.436337551 +0000 UTC m=+18.731580603" observedRunningTime="2024-10-09 07:51:02.228640509 +0000 UTC m=+19.523883574" watchObservedRunningTime="2024-10-09 07:51:02.228797216 +0000 UTC m=+19.524040287"
Oct 9 07:51:03.219314 kubelet[2521]: I1009 07:51:03.218017 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 07:51:03.219314 kubelet[2521]: E1009 07:51:03.218717 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:03.982135 kubelet[2521]: E1009 07:51:03.982074 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d"
Oct 9 07:51:04.201647 kubelet[2521]: E1009 07:51:04.201598 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:05.209923 kubelet[2521]: E1009 07:51:05.208572 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:05.982837 kubelet[2521]: E1009 07:51:05.982763 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d"
Oct 9 07:51:07.058319 containerd[1459]: time="2024-10-09T07:51:07.058241228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:07.060706 containerd[1459]: time="2024-10-09T07:51:07.060321079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Oct 9 07:51:07.062015 containerd[1459]: time="2024-10-09T07:51:07.061958896Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:07.066035 containerd[1459]: time="2024-10-09T07:51:07.065946094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:07.067277 containerd[1459]: time="2024-10-09T07:51:07.067215011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.625886533s"
Oct 9 07:51:07.067726 containerd[1459]: time="2024-10-09T07:51:07.067445770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Oct 9 07:51:07.071761 containerd[1459]: time="2024-10-09T07:51:07.071229670Z" level=info msg="CreateContainer within sandbox \"6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 9 07:51:07.109299 containerd[1459]: time="2024-10-09T07:51:07.109226143Z" level=info msg="CreateContainer within sandbox \"6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e\""
Oct 9 07:51:07.111371 containerd[1459]: time="2024-10-09T07:51:07.110298268Z" level=info msg="StartContainer for \"baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e\""
Oct 9 07:51:07.276246 systemd[1]: Started cri-containerd-baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e.scope - libcontainer container baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e.
Oct 9 07:51:07.344556 containerd[1459]: time="2024-10-09T07:51:07.344484911Z" level=info msg="StartContainer for \"baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e\" returns successfully"
Oct 9 07:51:07.982434 kubelet[2521]: E1009 07:51:07.981957 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d"
Oct 9 07:51:08.181655 systemd[1]: cri-containerd-baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e.scope: Deactivated successfully.
Oct 9 07:51:08.239046 kubelet[2521]: E1009 07:51:08.234965 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:08.241954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e-rootfs.mount: Deactivated successfully.
Oct 9 07:51:08.260627 containerd[1459]: time="2024-10-09T07:51:08.259140508Z" level=info msg="shim disconnected" id=baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e namespace=k8s.io
Oct 9 07:51:08.260627 containerd[1459]: time="2024-10-09T07:51:08.259230539Z" level=warning msg="cleaning up after shim disconnected" id=baeeb3ef6d3af861d44c36a5ecda292e4a27241d0b19bbc7206dff8d7018575e namespace=k8s.io
Oct 9 07:51:08.260627 containerd[1459]: time="2024-10-09T07:51:08.259244574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:51:08.313720 containerd[1459]: time="2024-10-09T07:51:08.313009619Z" level=warning msg="cleanup warnings time=\"2024-10-09T07:51:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 9 07:51:08.355446 kubelet[2521]: I1009 07:51:08.355392 2521 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Oct 9 07:51:08.441005 systemd[1]: Created slice kubepods-burstable-pod30cb655e_3275_4ce6_b495_a8243c54033b.slice - libcontainer container kubepods-burstable-pod30cb655e_3275_4ce6_b495_a8243c54033b.slice.
Oct 9 07:51:08.466669 systemd[1]: Created slice kubepods-burstable-pod0ba2cf0b_92bf_40e7_afd1_29479bd52c4d.slice - libcontainer container kubepods-burstable-pod0ba2cf0b_92bf_40e7_afd1_29479bd52c4d.slice.
Oct 9 07:51:08.482538 systemd[1]: Created slice kubepods-besteffort-podbd686d8f_58df_4186_8240_bba468f057cf.slice - libcontainer container kubepods-besteffort-podbd686d8f_58df_4186_8240_bba468f057cf.slice.
Oct 9 07:51:08.494421 kubelet[2521]: I1009 07:51:08.494254 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30cb655e-3275-4ce6-b495-a8243c54033b-config-volume\") pod \"coredns-6f6b679f8f-br874\" (UID: \"30cb655e-3275-4ce6-b495-a8243c54033b\") " pod="kube-system/coredns-6f6b679f8f-br874"
Oct 9 07:51:08.494421 kubelet[2521]: I1009 07:51:08.494322 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ba2cf0b-92bf-40e7-afd1-29479bd52c4d-config-volume\") pod \"coredns-6f6b679f8f-56sbq\" (UID: \"0ba2cf0b-92bf-40e7-afd1-29479bd52c4d\") " pod="kube-system/coredns-6f6b679f8f-56sbq"
Oct 9 07:51:08.494421 kubelet[2521]: I1009 07:51:08.494359 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgkf\" (UniqueName: \"kubernetes.io/projected/30cb655e-3275-4ce6-b495-a8243c54033b-kube-api-access-6cgkf\") pod \"coredns-6f6b679f8f-br874\" (UID: \"30cb655e-3275-4ce6-b495-a8243c54033b\") " pod="kube-system/coredns-6f6b679f8f-br874"
Oct 9 07:51:08.494421 kubelet[2521]: I1009 07:51:08.494390 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89phs\" (UniqueName: \"kubernetes.io/projected/0ba2cf0b-92bf-40e7-afd1-29479bd52c4d-kube-api-access-89phs\") pod \"coredns-6f6b679f8f-56sbq\" (UID: \"0ba2cf0b-92bf-40e7-afd1-29479bd52c4d\") " pod="kube-system/coredns-6f6b679f8f-56sbq"
Oct 9 07:51:08.600298 kubelet[2521]: I1009 07:51:08.597088 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxrk\" (UniqueName: \"kubernetes.io/projected/bd686d8f-58df-4186-8240-bba468f057cf-kube-api-access-7gxrk\") pod \"calico-kube-controllers-8468d56f97-2sc4p\" (UID: \"bd686d8f-58df-4186-8240-bba468f057cf\") " pod="calico-system/calico-kube-controllers-8468d56f97-2sc4p"
Oct 9 07:51:08.600298 kubelet[2521]: I1009 07:51:08.597314 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd686d8f-58df-4186-8240-bba468f057cf-tigera-ca-bundle\") pod \"calico-kube-controllers-8468d56f97-2sc4p\" (UID: \"bd686d8f-58df-4186-8240-bba468f057cf\") " pod="calico-system/calico-kube-controllers-8468d56f97-2sc4p"
Oct 9 07:51:08.757042 kubelet[2521]: E1009 07:51:08.754558 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:08.758010 containerd[1459]: time="2024-10-09T07:51:08.757753117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-br874,Uid:30cb655e-3275-4ce6-b495-a8243c54033b,Namespace:kube-system,Attempt:0,}"
Oct 9 07:51:08.776644 kubelet[2521]: E1009 07:51:08.776192 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:08.779845 containerd[1459]: time="2024-10-09T07:51:08.779186234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-56sbq,Uid:0ba2cf0b-92bf-40e7-afd1-29479bd52c4d,Namespace:kube-system,Attempt:0,}"
Oct 9 07:51:08.798556 containerd[1459]: time="2024-10-09T07:51:08.798040598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8468d56f97-2sc4p,Uid:bd686d8f-58df-4186-8240-bba468f057cf,Namespace:calico-system,Attempt:0,}"
Oct 9 07:51:09.164281 containerd[1459]: time="2024-10-09T07:51:09.164198380Z" level=error msg="Failed to destroy network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.167838 containerd[1459]: time="2024-10-09T07:51:09.166009186Z" level=error msg="Failed to destroy network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.169154 containerd[1459]: time="2024-10-09T07:51:09.169078720Z" level=error msg="encountered an error cleaning up failed sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.169436 containerd[1459]: time="2024-10-09T07:51:09.169397254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8468d56f97-2sc4p,Uid:bd686d8f-58df-4186-8240-bba468f057cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.170135 containerd[1459]: time="2024-10-09T07:51:09.170135768Z" level=error msg="encountered an error cleaning up failed sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.170389 containerd[1459]: time="2024-10-09T07:51:09.170220807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-56sbq,Uid:0ba2cf0b-92bf-40e7-afd1-29479bd52c4d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.177824 kubelet[2521]: E1009 07:51:09.176710 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.177824 kubelet[2521]: E1009 07:51:09.176794 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-56sbq"
Oct 9 07:51:09.177824 kubelet[2521]: E1009 07:51:09.176819 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-56sbq"
Oct 9 07:51:09.178533 kubelet[2521]: E1009 07:51:09.176873 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-56sbq_kube-system(0ba2cf0b-92bf-40e7-afd1-29479bd52c4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-56sbq_kube-system(0ba2cf0b-92bf-40e7-afd1-29479bd52c4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-56sbq" podUID="0ba2cf0b-92bf-40e7-afd1-29479bd52c4d"
Oct 9 07:51:09.178533 kubelet[2521]: E1009 07:51:09.177630 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.178533 kubelet[2521]: E1009 07:51:09.177698 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8468d56f97-2sc4p"
Oct 9 07:51:09.178675 kubelet[2521]: E1009 07:51:09.177721 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8468d56f97-2sc4p"
Oct 9 07:51:09.178675 kubelet[2521]: E1009 07:51:09.177761 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8468d56f97-2sc4p_calico-system(bd686d8f-58df-4186-8240-bba468f057cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8468d56f97-2sc4p_calico-system(bd686d8f-58df-4186-8240-bba468f057cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8468d56f97-2sc4p" podUID="bd686d8f-58df-4186-8240-bba468f057cf"
Oct 9 07:51:09.191101 containerd[1459]: time="2024-10-09T07:51:09.190510822Z" level=error msg="Failed to destroy network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.191101 containerd[1459]: time="2024-10-09T07:51:09.190972047Z" level=error msg="encountered an error cleaning up failed sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.191101 containerd[1459]: time="2024-10-09T07:51:09.191038477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-br874,Uid:30cb655e-3275-4ce6-b495-a8243c54033b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.191565 kubelet[2521]: E1009 07:51:09.191525 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.192139 kubelet[2521]: E1009 07:51:09.191865 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-br874"
Oct 9 07:51:09.192139 kubelet[2521]: E1009 07:51:09.191927 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-br874"
Oct 9 07:51:09.192139 kubelet[2521]: E1009 07:51:09.192006 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-br874_kube-system(30cb655e-3275-4ce6-b495-a8243c54033b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-br874_kube-system(30cb655e-3275-4ce6-b495-a8243c54033b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-br874" podUID="30cb655e-3275-4ce6-b495-a8243c54033b"
Oct 9 07:51:09.242513 kubelet[2521]: I1009 07:51:09.242464 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208"
Oct 9 07:51:09.250421 kubelet[2521]: I1009 07:51:09.249498 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:09.250656 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5-shm.mount: Deactivated successfully.
Oct 9 07:51:09.258350 containerd[1459]: time="2024-10-09T07:51:09.258232479Z" level=info msg="StopPodSandbox for \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\""
Oct 9 07:51:09.259421 containerd[1459]: time="2024-10-09T07:51:09.258934708Z" level=info msg="StopPodSandbox for \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\""
Oct 9 07:51:09.261359 containerd[1459]: time="2024-10-09T07:51:09.261268029Z" level=info msg="Ensure that sandbox fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208 in task-service has been cleanup successfully"
Oct 9 07:51:09.262570 containerd[1459]: time="2024-10-09T07:51:09.261470438Z" level=info msg="Ensure that sandbox b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5 in task-service has been cleanup successfully"
Oct 9 07:51:09.268305 kubelet[2521]: E1009 07:51:09.268258 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:09.276251 containerd[1459]: time="2024-10-09T07:51:09.276202112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 9 07:51:09.281209 kubelet[2521]: I1009 07:51:09.281139 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:09.285181 containerd[1459]: time="2024-10-09T07:51:09.284415297Z" level=info msg="StopPodSandbox for \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\""
Oct 9 07:51:09.287146 containerd[1459]: time="2024-10-09T07:51:09.286872647Z" level=info msg="Ensure that sandbox 26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d in task-service has been cleanup successfully"
Oct 9 07:51:09.397769 containerd[1459]: time="2024-10-09T07:51:09.394037891Z" level=error msg="StopPodSandbox for \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\" failed" error="failed to destroy network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.397769 containerd[1459]: time="2024-10-09T07:51:09.397362254Z" level=error msg="StopPodSandbox for \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\" failed" error="failed to destroy network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.398038 kubelet[2521]: E1009 07:51:09.397479 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:09.398038 kubelet[2521]: E1009 07:51:09.397561 2521 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"}
Oct 9 07:51:09.398038 kubelet[2521]: E1009 07:51:09.397651 2521 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30cb655e-3275-4ce6-b495-a8243c54033b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:51:09.398038 kubelet[2521]: E1009 07:51:09.397689 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30cb655e-3275-4ce6-b495-a8243c54033b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-br874" podUID="30cb655e-3275-4ce6-b495-a8243c54033b"
Oct 9 07:51:09.398603 kubelet[2521]: E1009 07:51:09.398421 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208"
Oct 9 07:51:09.398603 kubelet[2521]: E1009 07:51:09.398461 2521 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208"}
Oct 9 07:51:09.398603 kubelet[2521]: E1009 07:51:09.398504 2521 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ba2cf0b-92bf-40e7-afd1-29479bd52c4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:51:09.398603 kubelet[2521]: E1009 07:51:09.398535 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ba2cf0b-92bf-40e7-afd1-29479bd52c4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-56sbq" podUID="0ba2cf0b-92bf-40e7-afd1-29479bd52c4d"
Oct 9 07:51:09.409971 containerd[1459]: time="2024-10-09T07:51:09.409875943Z" level=error msg="StopPodSandbox for \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\" failed" error="failed to destroy network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:51:09.410596 kubelet[2521]: E1009 07:51:09.410354 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:09.410596 kubelet[2521]: E1009 07:51:09.410438 2521 kuberuntime_manager.go:1477] "Failed to stop sandbox"
podSandboxID={"Type":"containerd","ID":"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"} Oct 9 07:51:09.410596 kubelet[2521]: E1009 07:51:09.410489 2521 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd686d8f-58df-4186-8240-bba468f057cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:51:09.410596 kubelet[2521]: E1009 07:51:09.410527 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd686d8f-58df-4186-8240-bba468f057cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8468d56f97-2sc4p" podUID="bd686d8f-58df-4186-8240-bba468f057cf" Oct 9 07:51:09.994236 systemd[1]: Created slice kubepods-besteffort-pod8243e6c5_fffe_40ae_9ffc_3e5c0557a44d.slice - libcontainer container kubepods-besteffort-pod8243e6c5_fffe_40ae_9ffc_3e5c0557a44d.slice. 
Oct 9 07:51:09.998942 containerd[1459]: time="2024-10-09T07:51:09.998826560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxz42,Uid:8243e6c5-fffe-40ae-9ffc-3e5c0557a44d,Namespace:calico-system,Attempt:0,}" Oct 9 07:51:10.133953 containerd[1459]: time="2024-10-09T07:51:10.131159569Z" level=error msg="Failed to destroy network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:51:10.134754 containerd[1459]: time="2024-10-09T07:51:10.134683724Z" level=error msg="encountered an error cleaning up failed sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:51:10.135258 containerd[1459]: time="2024-10-09T07:51:10.135138101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxz42,Uid:8243e6c5-fffe-40ae-9ffc-3e5c0557a44d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:51:10.137319 kubelet[2521]: E1009 07:51:10.137216 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 9 07:51:10.137431 kubelet[2521]: E1009 07:51:10.137328 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxz42" Oct 9 07:51:10.137431 kubelet[2521]: E1009 07:51:10.137361 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxz42" Oct 9 07:51:10.137525 kubelet[2521]: E1009 07:51:10.137421 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxz42_calico-system(8243e6c5-fffe-40ae-9ffc-3e5c0557a44d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxz42_calico-system(8243e6c5-fffe-40ae-9ffc-3e5c0557a44d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d" Oct 9 07:51:10.137838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883-shm.mount: Deactivated 
successfully. Oct 9 07:51:10.286050 kubelet[2521]: I1009 07:51:10.285379 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:10.289977 containerd[1459]: time="2024-10-09T07:51:10.289004652Z" level=info msg="StopPodSandbox for \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\"" Oct 9 07:51:10.289977 containerd[1459]: time="2024-10-09T07:51:10.289325787Z" level=info msg="Ensure that sandbox ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883 in task-service has been cleanup successfully" Oct 9 07:51:10.336800 containerd[1459]: time="2024-10-09T07:51:10.336713962Z" level=error msg="StopPodSandbox for \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\" failed" error="failed to destroy network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:51:10.337103 kubelet[2521]: E1009 07:51:10.337033 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:10.337103 kubelet[2521]: E1009 07:51:10.337086 2521 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883"} Oct 9 07:51:10.337217 kubelet[2521]: E1009 07:51:10.337122 2521 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:51:10.337217 kubelet[2521]: E1009 07:51:10.337151 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxz42" podUID="8243e6c5-fffe-40ae-9ffc-3e5c0557a44d" Oct 9 07:51:15.299269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475410616.mount: Deactivated successfully. 
Oct 9 07:51:15.506778 containerd[1459]: time="2024-10-09T07:51:15.506516472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:51:15.527085 containerd[1459]: time="2024-10-09T07:51:15.466560780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:51:15.546858 containerd[1459]: time="2024-10-09T07:51:15.546791989Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:51:15.549847 containerd[1459]: time="2024-10-09T07:51:15.549685841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:51:15.550977 containerd[1459]: time="2024-10-09T07:51:15.550932054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.274239475s" Oct 9 07:51:15.550977 containerd[1459]: time="2024-10-09T07:51:15.550976509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:51:15.625276 containerd[1459]: time="2024-10-09T07:51:15.625207560Z" level=info msg="CreateContainer within sandbox \"6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:51:15.673858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount305650381.mount: Deactivated 
successfully. Oct 9 07:51:15.687956 containerd[1459]: time="2024-10-09T07:51:15.687724967Z" level=info msg="CreateContainer within sandbox \"6f1e861f7770c24c418edc8fd774b3249a706ce20522fc003a3a4e53bd1673fa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"90a987e4655cd578319287b647aa824586ee690423d0d105912f260d1608aa95\"" Oct 9 07:51:15.691038 containerd[1459]: time="2024-10-09T07:51:15.688852753Z" level=info msg="StartContainer for \"90a987e4655cd578319287b647aa824586ee690423d0d105912f260d1608aa95\"" Oct 9 07:51:15.933675 systemd[1]: Started cri-containerd-90a987e4655cd578319287b647aa824586ee690423d0d105912f260d1608aa95.scope - libcontainer container 90a987e4655cd578319287b647aa824586ee690423d0d105912f260d1608aa95. Oct 9 07:51:16.040514 containerd[1459]: time="2024-10-09T07:51:16.040457609Z" level=info msg="StartContainer for \"90a987e4655cd578319287b647aa824586ee690423d0d105912f260d1608aa95\" returns successfully" Oct 9 07:51:16.153916 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:51:16.156012 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 9 07:51:16.322903 kubelet[2521]: E1009 07:51:16.322670 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:17.325064 kubelet[2521]: E1009 07:51:17.324992 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:17.371343 systemd[1]: run-containerd-runc-k8s.io-90a987e4655cd578319287b647aa824586ee690423d0d105912f260d1608aa95-runc.m85jdg.mount: Deactivated successfully. 
Oct 9 07:51:18.332679 kubelet[2521]: E1009 07:51:18.332630 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:18.549922 kernel: bpftool[3692]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:51:19.056396 systemd-networkd[1358]: vxlan.calico: Link UP Oct 9 07:51:19.056412 systemd-networkd[1358]: vxlan.calico: Gained carrier Oct 9 07:51:20.137136 systemd-networkd[1358]: vxlan.calico: Gained IPv6LL Oct 9 07:51:21.983246 containerd[1459]: time="2024-10-09T07:51:21.983201654Z" level=info msg="StopPodSandbox for \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\"" Oct 9 07:51:22.168875 kubelet[2521]: I1009 07:51:22.146403 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s7m66" podStartSLOduration=8.286439882 podStartE2EDuration="27.140935s" podCreationTimestamp="2024-10-09 07:50:55 +0000 UTC" firstStartedPulling="2024-10-09 07:50:56.712065309 +0000 UTC m=+14.007308360" lastFinishedPulling="2024-10-09 07:51:15.566560429 +0000 UTC m=+32.861803478" observedRunningTime="2024-10-09 07:51:16.369043663 +0000 UTC m=+33.664286723" watchObservedRunningTime="2024-10-09 07:51:22.140935 +0000 UTC m=+39.436178130" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.116 [INFO][3781] k8s.go 608: Cleaning up netns ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.118 [INFO][3781] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" iface="eth0" netns="/var/run/netns/cni-5382fa46-1e81-d546-b744-febc759e9574" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.119 [INFO][3781] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" iface="eth0" netns="/var/run/netns/cni-5382fa46-1e81-d546-b744-febc759e9574" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.121 [INFO][3781] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" iface="eth0" netns="/var/run/netns/cni-5382fa46-1e81-d546-b744-febc759e9574" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.121 [INFO][3781] k8s.go 615: Releasing IP address(es) ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.121 [INFO][3781] utils.go 188: Calico CNI releasing IP address ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.332 [INFO][3788] ipam_plugin.go 417: Releasing address using handleID ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.333 [INFO][3788] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.334 [INFO][3788] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.348 [WARNING][3788] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.348 [INFO][3788] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.352 [INFO][3788] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:22.361164 containerd[1459]: 2024-10-09 07:51:22.357 [INFO][3781] k8s.go 621: Teardown processing complete. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Oct 9 07:51:22.364628 containerd[1459]: time="2024-10-09T07:51:22.362155485Z" level=info msg="TearDown network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\" successfully" Oct 9 07:51:22.364628 containerd[1459]: time="2024-10-09T07:51:22.362200069Z" level=info msg="StopPodSandbox for \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\" returns successfully" Oct 9 07:51:22.364840 kubelet[2521]: E1009 07:51:22.362790 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:22.366249 containerd[1459]: time="2024-10-09T07:51:22.365551869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-br874,Uid:30cb655e-3275-4ce6-b495-a8243c54033b,Namespace:kube-system,Attempt:1,}" Oct 9 07:51:22.374334 systemd[1]: run-netns-cni\x2d5382fa46\x2d1e81\x2dd546\x2db744\x2dfebc759e9574.mount: Deactivated successfully. 
Oct 9 07:51:22.662240 systemd-networkd[1358]: calibd735613a92: Link UP Oct 9 07:51:22.665214 systemd-networkd[1358]: calibd735613a92: Gained carrier Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.519 [INFO][3794] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0 coredns-6f6b679f8f- kube-system 30cb655e-3275-4ce6-b495-a8243c54033b 708 0 2024-10-09 07:50:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.1.0-6-a1de16b848 coredns-6f6b679f8f-br874 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd735613a92 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-br874" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.520 [INFO][3794] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-br874" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.576 [INFO][3805] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" HandleID="k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.588 [INFO][3805] ipam_plugin.go 270: Auto assigning IP 
ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" HandleID="k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002957d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.1.0-6-a1de16b848", "pod":"coredns-6f6b679f8f-br874", "timestamp":"2024-10-09 07:51:22.576156606 +0000 UTC"}, Hostname:"ci-4081.1.0-6-a1de16b848", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.588 [INFO][3805] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.588 [INFO][3805] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.588 [INFO][3805] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-6-a1de16b848' Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.592 [INFO][3805] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.602 [INFO][3805] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.615 [INFO][3805] ipam.go 489: Trying affinity for 192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.621 [INFO][3805] ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.626 [INFO][3805] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.626 [INFO][3805] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.628 [INFO][3805] ipam.go 1685: Creating new handle: k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.635 [INFO][3805] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.652 [INFO][3805] ipam.go 1216: Successfully claimed IPs: [192.168.77.193/26] 
block=192.168.77.192/26 handle="k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.652 [INFO][3805] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.193/26] handle="k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.652 [INFO][3805] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:22.700291 containerd[1459]: 2024-10-09 07:51:22.652 [INFO][3805] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.193/26] IPv6=[] ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" HandleID="k8s-pod-network.78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.701652 containerd[1459]: 2024-10-09 07:51:22.656 [INFO][3794] k8s.go 386: Populated endpoint ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-br874" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"30cb655e-3275-4ce6-b495-a8243c54033b", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"", Pod:"coredns-6f6b679f8f-br874", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd735613a92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:22.701652 containerd[1459]: 2024-10-09 07:51:22.657 [INFO][3794] k8s.go 387: Calico CNI using IPs: [192.168.77.193/32] ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-br874" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.701652 containerd[1459]: 2024-10-09 07:51:22.657 [INFO][3794] dataplane_linux.go 68: Setting the host side veth name to calibd735613a92 ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-br874" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.701652 containerd[1459]: 2024-10-09 07:51:22.663 [INFO][3794] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-br874" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.701652 containerd[1459]: 2024-10-09 07:51:22.664 [INFO][3794] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-br874" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"30cb655e-3275-4ce6-b495-a8243c54033b", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f", Pod:"coredns-6f6b679f8f-br874", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd735613a92", MAC:"06:4e:f2:a4:c4:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:22.701652 containerd[1459]: 2024-10-09 07:51:22.692 [INFO][3794] k8s.go 500: Wrote updated endpoint to datastore ContainerID="78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-br874" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0" Oct 9 07:51:22.750472 containerd[1459]: time="2024-10-09T07:51:22.749793019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:51:22.750472 containerd[1459]: time="2024-10-09T07:51:22.749911334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:51:22.750472 containerd[1459]: time="2024-10-09T07:51:22.749937925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:22.750472 containerd[1459]: time="2024-10-09T07:51:22.750085744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:22.799260 systemd[1]: Started cri-containerd-78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f.scope - libcontainer container 78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f. 
Oct 9 07:51:22.892642 containerd[1459]: time="2024-10-09T07:51:22.892559374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-br874,Uid:30cb655e-3275-4ce6-b495-a8243c54033b,Namespace:kube-system,Attempt:1,} returns sandbox id \"78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f\"" Oct 9 07:51:22.895980 kubelet[2521]: E1009 07:51:22.894540 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:22.900796 containerd[1459]: time="2024-10-09T07:51:22.900731893Z" level=info msg="CreateContainer within sandbox \"78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:51:22.946038 containerd[1459]: time="2024-10-09T07:51:22.945831321Z" level=info msg="CreateContainer within sandbox \"78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0bac3b6e572e3691c0544a04625e86a7e1ced1d4699cdee5ab2406fb934360a\"" Oct 9 07:51:22.949283 containerd[1459]: time="2024-10-09T07:51:22.948466689Z" level=info msg="StartContainer for \"e0bac3b6e572e3691c0544a04625e86a7e1ced1d4699cdee5ab2406fb934360a\"" Oct 9 07:51:22.992836 containerd[1459]: time="2024-10-09T07:51:22.990813314Z" level=info msg="StopPodSandbox for \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\"" Oct 9 07:51:23.000461 systemd[1]: Started cri-containerd-e0bac3b6e572e3691c0544a04625e86a7e1ced1d4699cdee5ab2406fb934360a.scope - libcontainer container e0bac3b6e572e3691c0544a04625e86a7e1ced1d4699cdee5ab2406fb934360a. 
Oct 9 07:51:23.108952 containerd[1459]: time="2024-10-09T07:51:23.107741040Z" level=info msg="StartContainer for \"e0bac3b6e572e3691c0544a04625e86a7e1ced1d4699cdee5ab2406fb934360a\" returns successfully" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.109 [INFO][3907] k8s.go 608: Cleaning up netns ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.112 [INFO][3907] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" iface="eth0" netns="/var/run/netns/cni-2ea1063e-62db-f683-c50b-c3c719886caf" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.114 [INFO][3907] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" iface="eth0" netns="/var/run/netns/cni-2ea1063e-62db-f683-c50b-c3c719886caf" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.114 [INFO][3907] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" iface="eth0" netns="/var/run/netns/cni-2ea1063e-62db-f683-c50b-c3c719886caf" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.114 [INFO][3907] k8s.go 615: Releasing IP address(es) ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.114 [INFO][3907] utils.go 188: Calico CNI releasing IP address ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.162 [INFO][3920] ipam_plugin.go 417: Releasing address using handleID ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.163 [INFO][3920] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.167 [INFO][3920] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.179 [WARNING][3920] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.179 [INFO][3920] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.183 [INFO][3920] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:23.188625 containerd[1459]: 2024-10-09 07:51:23.185 [INFO][3907] k8s.go 621: Teardown processing complete. ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:23.190790 containerd[1459]: time="2024-10-09T07:51:23.189193583Z" level=info msg="TearDown network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\" successfully" Oct 9 07:51:23.190790 containerd[1459]: time="2024-10-09T07:51:23.189246995Z" level=info msg="StopPodSandbox for \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\" returns successfully" Oct 9 07:51:23.191284 kubelet[2521]: E1009 07:51:23.191219 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:23.193869 containerd[1459]: time="2024-10-09T07:51:23.193817857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-56sbq,Uid:0ba2cf0b-92bf-40e7-afd1-29479bd52c4d,Namespace:kube-system,Attempt:1,}" Oct 9 07:51:23.367203 kubelet[2521]: E1009 07:51:23.362718 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:23.385906 systemd[1]: run-containerd-runc-k8s.io-78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f-runc.IurRMY.mount: Deactivated successfully. Oct 9 07:51:23.388395 systemd[1]: run-netns-cni\x2d2ea1063e\x2d62db\x2df683\x2dc50b\x2dc3c719886caf.mount: Deactivated successfully. Oct 9 07:51:23.458626 kubelet[2521]: I1009 07:51:23.457860 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-br874" podStartSLOduration=35.457832742 podStartE2EDuration="35.457832742s" podCreationTimestamp="2024-10-09 07:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:51:23.441982302 +0000 UTC m=+40.737225362" watchObservedRunningTime="2024-10-09 07:51:23.457832742 +0000 UTC m=+40.753075801" Oct 9 07:51:23.588670 systemd-networkd[1358]: cali908f02abf77: Link UP Oct 9 07:51:23.588949 systemd-networkd[1358]: cali908f02abf77: Gained carrier Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.301 [INFO][3937] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0 coredns-6f6b679f8f- kube-system 0ba2cf0b-92bf-40e7-afd1-29479bd52c4d 718 0 2024-10-09 07:50:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.1.0-6-a1de16b848 coredns-6f6b679f8f-56sbq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali908f02abf77 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Namespace="kube-system" Pod="coredns-6f6b679f8f-56sbq" 
WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.301 [INFO][3937] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Namespace="kube-system" Pod="coredns-6f6b679f8f-56sbq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.416 [INFO][3943] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" HandleID="k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.462 [INFO][3943] ipam_plugin.go 270: Auto assigning IP ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" HandleID="k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290330), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.1.0-6-a1de16b848", "pod":"coredns-6f6b679f8f-56sbq", "timestamp":"2024-10-09 07:51:23.41661333 +0000 UTC"}, Hostname:"ci-4081.1.0-6-a1de16b848", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.463 [INFO][3943] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.463 [INFO][3943] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.463 [INFO][3943] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-6-a1de16b848' Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.486 [INFO][3943] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.539 [INFO][3943] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.548 [INFO][3943] ipam.go 489: Trying affinity for 192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.552 [INFO][3943] ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.556 [INFO][3943] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.556 [INFO][3943] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.559 [INFO][3943] ipam.go 1685: Creating new handle: k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507 Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.566 [INFO][3943] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.579 [INFO][3943] ipam.go 1216: Successfully claimed IPs: [192.168.77.194/26] 
block=192.168.77.192/26 handle="k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.579 [INFO][3943] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.194/26] handle="k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.579 [INFO][3943] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:23.627075 containerd[1459]: 2024-10-09 07:51:23.579 [INFO][3943] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.194/26] IPv6=[] ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" HandleID="k8s-pod-network.0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.628296 containerd[1459]: 2024-10-09 07:51:23.583 [INFO][3937] k8s.go 386: Populated endpoint ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Namespace="kube-system" Pod="coredns-6f6b679f8f-56sbq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0ba2cf0b-92bf-40e7-afd1-29479bd52c4d", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"", Pod:"coredns-6f6b679f8f-56sbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali908f02abf77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:23.628296 containerd[1459]: 2024-10-09 07:51:23.584 [INFO][3937] k8s.go 387: Calico CNI using IPs: [192.168.77.194/32] ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Namespace="kube-system" Pod="coredns-6f6b679f8f-56sbq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.628296 containerd[1459]: 2024-10-09 07:51:23.584 [INFO][3937] dataplane_linux.go 68: Setting the host side veth name to cali908f02abf77 ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Namespace="kube-system" Pod="coredns-6f6b679f8f-56sbq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.628296 containerd[1459]: 2024-10-09 07:51:23.587 [INFO][3937] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-56sbq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.628296 containerd[1459]: 2024-10-09 07:51:23.591 [INFO][3937] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Namespace="kube-system" Pod="coredns-6f6b679f8f-56sbq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0ba2cf0b-92bf-40e7-afd1-29479bd52c4d", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507", Pod:"coredns-6f6b679f8f-56sbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali908f02abf77", MAC:"9e:8b:9f:74:25:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:23.628296 containerd[1459]: 2024-10-09 07:51:23.610 [INFO][3937] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507" Namespace="kube-system" Pod="coredns-6f6b679f8f-56sbq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:23.691613 containerd[1459]: time="2024-10-09T07:51:23.689194124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:51:23.691613 containerd[1459]: time="2024-10-09T07:51:23.689300572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:51:23.691613 containerd[1459]: time="2024-10-09T07:51:23.689319672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:23.691613 containerd[1459]: time="2024-10-09T07:51:23.689447021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:23.747181 systemd[1]: Started cri-containerd-0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507.scope - libcontainer container 0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507. 
Oct 9 07:51:23.826942 containerd[1459]: time="2024-10-09T07:51:23.826389866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-56sbq,Uid:0ba2cf0b-92bf-40e7-afd1-29479bd52c4d,Namespace:kube-system,Attempt:1,} returns sandbox id \"0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507\"" Oct 9 07:51:23.827849 kubelet[2521]: E1009 07:51:23.827778 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:23.834707 containerd[1459]: time="2024-10-09T07:51:23.834522425Z" level=info msg="CreateContainer within sandbox \"0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:51:23.861556 containerd[1459]: time="2024-10-09T07:51:23.858370657Z" level=info msg="CreateContainer within sandbox \"0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0216fab00c8ec3d15b3caf9f4bb5da2363a1ba7a3e95b4b10bc8d7de6f55284\"" Oct 9 07:51:23.864227 containerd[1459]: time="2024-10-09T07:51:23.862181755Z" level=info msg="StartContainer for \"d0216fab00c8ec3d15b3caf9f4bb5da2363a1ba7a3e95b4b10bc8d7de6f55284\"" Oct 9 07:51:23.958171 systemd[1]: Started cri-containerd-d0216fab00c8ec3d15b3caf9f4bb5da2363a1ba7a3e95b4b10bc8d7de6f55284.scope - libcontainer container d0216fab00c8ec3d15b3caf9f4bb5da2363a1ba7a3e95b4b10bc8d7de6f55284. 
Oct 9 07:51:23.983604 containerd[1459]: time="2024-10-09T07:51:23.983455408Z" level=info msg="StopPodSandbox for \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\"" Oct 9 07:51:24.028106 containerd[1459]: time="2024-10-09T07:51:24.027984217Z" level=info msg="StartContainer for \"d0216fab00c8ec3d15b3caf9f4bb5da2363a1ba7a3e95b4b10bc8d7de6f55284\" returns successfully" Oct 9 07:51:24.224976 systemd[1]: Started sshd@9-64.23.134.87:22-139.178.89.65:41770.service - OpenSSH per-connection server daemon (139.178.89.65:41770). Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.238 [INFO][4053] k8s.go 608: Cleaning up netns ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.238 [INFO][4053] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" iface="eth0" netns="/var/run/netns/cni-fa24e712-8b99-4d0e-ef2a-094111ccb5cb" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.240 [INFO][4053] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" iface="eth0" netns="/var/run/netns/cni-fa24e712-8b99-4d0e-ef2a-094111ccb5cb" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.240 [INFO][4053] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" iface="eth0" netns="/var/run/netns/cni-fa24e712-8b99-4d0e-ef2a-094111ccb5cb" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.240 [INFO][4053] k8s.go 615: Releasing IP address(es) ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.240 [INFO][4053] utils.go 188: Calico CNI releasing IP address ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.334 [INFO][4067] ipam_plugin.go 417: Releasing address using handleID ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.338 [INFO][4067] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.338 [INFO][4067] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.369 [WARNING][4067] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.369 [INFO][4067] ipam_plugin.go 445: Releasing address using workloadID ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.385 [INFO][4067] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:24.428762 containerd[1459]: 2024-10-09 07:51:24.405 [INFO][4053] k8s.go 621: Teardown processing complete. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Oct 9 07:51:24.429705 containerd[1459]: time="2024-10-09T07:51:24.429096849Z" level=info msg="TearDown network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\" successfully" Oct 9 07:51:24.429705 containerd[1459]: time="2024-10-09T07:51:24.429173475Z" level=info msg="StopPodSandbox for \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\" returns successfully" Oct 9 07:51:24.435697 containerd[1459]: time="2024-10-09T07:51:24.432839573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8468d56f97-2sc4p,Uid:bd686d8f-58df-4186-8240-bba468f057cf,Namespace:calico-system,Attempt:1,}" Oct 9 07:51:24.456151 systemd[1]: run-netns-cni\x2dfa24e712\x2d8b99\x2d4d0e\x2def2a\x2d094111ccb5cb.mount: Deactivated successfully. 
Oct 9 07:51:24.461534 kubelet[2521]: E1009 07:51:24.458641 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:24.461534 kubelet[2521]: E1009 07:51:24.458656 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:24.474094 sshd[4066]: Accepted publickey for core from 139.178.89.65 port 41770 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:51:24.485396 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:51:24.535592 systemd-logind[1447]: New session 10 of user core. Oct 9 07:51:24.541231 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:51:24.653776 kubelet[2521]: I1009 07:51:24.653428 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-56sbq" podStartSLOduration=36.653399635 podStartE2EDuration="36.653399635s" podCreationTimestamp="2024-10-09 07:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:51:24.584325446 +0000 UTC m=+41.879568506" watchObservedRunningTime="2024-10-09 07:51:24.653399635 +0000 UTC m=+41.948642692" Oct 9 07:51:24.684974 systemd-networkd[1358]: calibd735613a92: Gained IPv6LL Oct 9 07:51:24.990907 containerd[1459]: time="2024-10-09T07:51:24.987956528Z" level=info msg="StopPodSandbox for \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\"" Oct 9 07:51:25.028095 systemd-networkd[1358]: cali8063b1fa749: Link UP Oct 9 07:51:25.034559 systemd-networkd[1358]: cali8063b1fa749: Gained carrier Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.675 [INFO][4080] plugin.go 326: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0 calico-kube-controllers-8468d56f97- calico-system bd686d8f-58df-4186-8240-bba468f057cf 768 0 2024-10-09 07:50:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8468d56f97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.1.0-6-a1de16b848 calico-kube-controllers-8468d56f97-2sc4p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8063b1fa749 [] []}} ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Namespace="calico-system" Pod="calico-kube-controllers-8468d56f97-2sc4p" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.677 [INFO][4080] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Namespace="calico-system" Pod="calico-kube-controllers-8468d56f97-2sc4p" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.807 [INFO][4103] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" HandleID="k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.828 [INFO][4103] ipam_plugin.go 270: Auto assigning IP ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" 
HandleID="k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005da340), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.1.0-6-a1de16b848", "pod":"calico-kube-controllers-8468d56f97-2sc4p", "timestamp":"2024-10-09 07:51:24.8076787 +0000 UTC"}, Hostname:"ci-4081.1.0-6-a1de16b848", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.829 [INFO][4103] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.829 [INFO][4103] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.829 [INFO][4103] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-6-a1de16b848' Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.836 [INFO][4103] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.922 [INFO][4103] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.935 [INFO][4103] ipam.go 489: Trying affinity for 192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.942 [INFO][4103] ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.946 [INFO][4103] ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.946 [INFO][4103] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.950 [INFO][4103] ipam.go 1685: Creating new handle: k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.964 [INFO][4103] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.992 [INFO][4103] ipam.go 1216: Successfully claimed IPs: [192.168.77.195/26] block=192.168.77.192/26 handle="k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.992 [INFO][4103] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.195/26] handle="k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.992 [INFO][4103] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:51:25.131566 containerd[1459]: 2024-10-09 07:51:24.993 [INFO][4103] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.195/26] IPv6=[] ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" HandleID="k8s-pod-network.4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:25.136700 containerd[1459]: 2024-10-09 07:51:25.012 [INFO][4080] k8s.go 386: Populated endpoint ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Namespace="calico-system" Pod="calico-kube-controllers-8468d56f97-2sc4p" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0", GenerateName:"calico-kube-controllers-8468d56f97-", Namespace:"calico-system", SelfLink:"", UID:"bd686d8f-58df-4186-8240-bba468f057cf", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8468d56f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"", Pod:"calico-kube-controllers-8468d56f97-2sc4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8063b1fa749", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:25.136700 containerd[1459]: 2024-10-09 07:51:25.012 [INFO][4080] k8s.go 387: Calico CNI using IPs: [192.168.77.195/32] ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Namespace="calico-system" Pod="calico-kube-controllers-8468d56f97-2sc4p" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:25.136700 containerd[1459]: 2024-10-09 07:51:25.013 [INFO][4080] dataplane_linux.go 68: Setting the host side veth name to cali8063b1fa749 ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Namespace="calico-system" Pod="calico-kube-controllers-8468d56f97-2sc4p" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:25.136700 containerd[1459]: 2024-10-09 07:51:25.035 [INFO][4080] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Namespace="calico-system" Pod="calico-kube-controllers-8468d56f97-2sc4p" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:25.136700 containerd[1459]: 2024-10-09 07:51:25.037 [INFO][4080] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Namespace="calico-system" Pod="calico-kube-controllers-8468d56f97-2sc4p" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0", GenerateName:"calico-kube-controllers-8468d56f97-", Namespace:"calico-system", SelfLink:"", UID:"bd686d8f-58df-4186-8240-bba468f057cf", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8468d56f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac", Pod:"calico-kube-controllers-8468d56f97-2sc4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8063b1fa749", MAC:"66:ae:e7:82:97:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:25.136700 containerd[1459]: 2024-10-09 07:51:25.120 [INFO][4080] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac" Namespace="calico-system" Pod="calico-kube-controllers-8468d56f97-2sc4p" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0" Oct 9 07:51:25.154215 sshd[4066]: pam_unix(sshd:session): session closed for user core Oct 9 07:51:25.163882 systemd[1]: 
sshd@9-64.23.134.87:22-139.178.89.65:41770.service: Deactivated successfully. Oct 9 07:51:25.173326 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:51:25.183240 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:51:25.191372 systemd-logind[1447]: Removed session 10. Oct 9 07:51:25.252215 containerd[1459]: time="2024-10-09T07:51:25.249752428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:51:25.252215 containerd[1459]: time="2024-10-09T07:51:25.251271480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:51:25.252215 containerd[1459]: time="2024-10-09T07:51:25.251321355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:25.252215 containerd[1459]: time="2024-10-09T07:51:25.251549041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:25.316985 systemd[1]: Started cri-containerd-4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac.scope - libcontainer container 4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac. Oct 9 07:51:25.321510 systemd-networkd[1358]: cali908f02abf77: Gained IPv6LL Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.258 [INFO][4139] k8s.go 608: Cleaning up netns ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.258 [INFO][4139] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" iface="eth0" netns="/var/run/netns/cni-5ba30800-0ed2-e2ac-327d-78f1b828a846" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.259 [INFO][4139] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" iface="eth0" netns="/var/run/netns/cni-5ba30800-0ed2-e2ac-327d-78f1b828a846" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.265 [INFO][4139] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" iface="eth0" netns="/var/run/netns/cni-5ba30800-0ed2-e2ac-327d-78f1b828a846" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.265 [INFO][4139] k8s.go 615: Releasing IP address(es) ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.265 [INFO][4139] utils.go 188: Calico CNI releasing IP address ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.377 [INFO][4178] ipam_plugin.go 417: Releasing address using handleID ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.378 [INFO][4178] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.379 [INFO][4178] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.388 [WARNING][4178] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.388 [INFO][4178] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.391 [INFO][4178] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:25.406981 containerd[1459]: 2024-10-09 07:51:25.398 [INFO][4139] k8s.go 621: Teardown processing complete. ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:25.412365 containerd[1459]: time="2024-10-09T07:51:25.409455631Z" level=info msg="TearDown network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\" successfully" Oct 9 07:51:25.412365 containerd[1459]: time="2024-10-09T07:51:25.409499245Z" level=info msg="StopPodSandbox for \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\" returns successfully" Oct 9 07:51:25.417788 containerd[1459]: time="2024-10-09T07:51:25.417691884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxz42,Uid:8243e6c5-fffe-40ae-9ffc-3e5c0557a44d,Namespace:calico-system,Attempt:1,}" Oct 9 07:51:25.419418 systemd[1]: run-netns-cni\x2d5ba30800\x2d0ed2\x2de2ac\x2d327d\x2d78f1b828a846.mount: Deactivated successfully. 
Oct 9 07:51:25.470351 kubelet[2521]: E1009 07:51:25.465745 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:25.470351 kubelet[2521]: E1009 07:51:25.466028 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:25.575092 containerd[1459]: time="2024-10-09T07:51:25.574598554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8468d56f97-2sc4p,Uid:bd686d8f-58df-4186-8240-bba468f057cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac\"" Oct 9 07:51:25.577848 containerd[1459]: time="2024-10-09T07:51:25.577803347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:51:25.893089 systemd-networkd[1358]: calic0e436d85ae: Link UP Oct 9 07:51:25.895757 systemd-networkd[1358]: calic0e436d85ae: Gained carrier Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.604 [INFO][4201] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0 csi-node-driver- calico-system 8243e6c5-fffe-40ae-9ffc-3e5c0557a44d 792 0 2024-10-09 07:50:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081.1.0-6-a1de16b848 csi-node-driver-lxz42 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calic0e436d85ae [] []}} ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" 
Namespace="calico-system" Pod="csi-node-driver-lxz42" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.604 [INFO][4201] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Namespace="calico-system" Pod="csi-node-driver-lxz42" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.693 [INFO][4220] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" HandleID="k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.712 [INFO][4220] ipam_plugin.go 270: Auto assigning IP ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" HandleID="k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290e70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.1.0-6-a1de16b848", "pod":"csi-node-driver-lxz42", "timestamp":"2024-10-09 07:51:25.693626661 +0000 UTC"}, Hostname:"ci-4081.1.0-6-a1de16b848", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.712 [INFO][4220] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.713 [INFO][4220] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.713 [INFO][4220] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-6-a1de16b848' Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.720 [INFO][4220] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.817 [INFO][4220] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.829 [INFO][4220] ipam.go 489: Trying affinity for 192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.834 [INFO][4220] ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.840 [INFO][4220] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.840 [INFO][4220] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.846 [INFO][4220] ipam.go 1685: Creating new handle: k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.861 [INFO][4220] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.882 [INFO][4220] ipam.go 1216: Successfully claimed IPs: [192.168.77.196/26] 
block=192.168.77.192/26 handle="k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.882 [INFO][4220] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.196/26] handle="k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.882 [INFO][4220] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:25.932554 containerd[1459]: 2024-10-09 07:51:25.882 [INFO][4220] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.196/26] IPv6=[] ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" HandleID="k8s-pod-network.e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.934235 containerd[1459]: 2024-10-09 07:51:25.886 [INFO][4201] k8s.go 386: Populated endpoint ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Namespace="calico-system" Pod="csi-node-driver-lxz42" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"", Pod:"csi-node-driver-lxz42", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic0e436d85ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:25.934235 containerd[1459]: 2024-10-09 07:51:25.887 [INFO][4201] k8s.go 387: Calico CNI using IPs: [192.168.77.196/32] ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Namespace="calico-system" Pod="csi-node-driver-lxz42" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.934235 containerd[1459]: 2024-10-09 07:51:25.887 [INFO][4201] dataplane_linux.go 68: Setting the host side veth name to calic0e436d85ae ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Namespace="calico-system" Pod="csi-node-driver-lxz42" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.934235 containerd[1459]: 2024-10-09 07:51:25.895 [INFO][4201] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Namespace="calico-system" Pod="csi-node-driver-lxz42" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.934235 containerd[1459]: 2024-10-09 07:51:25.896 [INFO][4201] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Namespace="calico-system" Pod="csi-node-driver-lxz42" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc", Pod:"csi-node-driver-lxz42", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic0e436d85ae", MAC:"5e:f3:84:0f:dd:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:25.934235 containerd[1459]: 2024-10-09 07:51:25.921 [INFO][4201] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc" Namespace="calico-system" 
Pod="csi-node-driver-lxz42" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:25.994104 containerd[1459]: time="2024-10-09T07:51:25.993122873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:51:25.995814 containerd[1459]: time="2024-10-09T07:51:25.995379986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:51:25.995814 containerd[1459]: time="2024-10-09T07:51:25.995449417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:25.995814 containerd[1459]: time="2024-10-09T07:51:25.995668845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:26.077124 systemd[1]: Started cri-containerd-e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc.scope - libcontainer container e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc. 
Oct 9 07:51:26.169850 containerd[1459]: time="2024-10-09T07:51:26.168068615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxz42,Uid:8243e6c5-fffe-40ae-9ffc-3e5c0557a44d,Namespace:calico-system,Attempt:1,} returns sandbox id \"e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc\"" Oct 9 07:51:26.482055 kubelet[2521]: E1009 07:51:26.480834 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:26.485168 kubelet[2521]: E1009 07:51:26.480854 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:51:26.574697 systemd[1]: Started sshd@10-64.23.134.87:22-60.191.20.210:23456.service - OpenSSH per-connection server daemon (60.191.20.210:23456). Oct 9 07:51:27.051087 systemd-networkd[1358]: cali8063b1fa749: Gained IPv6LL Oct 9 07:51:27.114699 systemd-networkd[1358]: calic0e436d85ae: Gained IPv6LL Oct 9 07:51:28.704199 containerd[1459]: time="2024-10-09T07:51:28.703968836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:51:28.708431 containerd[1459]: time="2024-10-09T07:51:28.708168164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:51:28.711897 containerd[1459]: time="2024-10-09T07:51:28.711804198Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:51:28.717349 containerd[1459]: time="2024-10-09T07:51:28.717264908Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:51:28.719318 containerd[1459]: time="2024-10-09T07:51:28.719138467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.141287339s" Oct 9 07:51:28.720234 containerd[1459]: time="2024-10-09T07:51:28.719310730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:51:28.723931 containerd[1459]: time="2024-10-09T07:51:28.722816356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:51:28.763249 containerd[1459]: time="2024-10-09T07:51:28.763171211Z" level=info msg="CreateContainer within sandbox \"4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:51:28.818037 containerd[1459]: time="2024-10-09T07:51:28.817915581Z" level=info msg="CreateContainer within sandbox \"4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"025671f7fa668b1dd32ec44ec680fc56c388a5a4bf046a1323ce7861a8e2bf7c\"" Oct 9 07:51:28.820370 containerd[1459]: time="2024-10-09T07:51:28.819456488Z" level=info msg="StartContainer for \"025671f7fa668b1dd32ec44ec680fc56c388a5a4bf046a1323ce7861a8e2bf7c\"" Oct 9 07:51:28.991235 systemd[1]: Started cri-containerd-025671f7fa668b1dd32ec44ec680fc56c388a5a4bf046a1323ce7861a8e2bf7c.scope - 
libcontainer container 025671f7fa668b1dd32ec44ec680fc56c388a5a4bf046a1323ce7861a8e2bf7c. Oct 9 07:51:29.116137 containerd[1459]: time="2024-10-09T07:51:29.116049391Z" level=info msg="StartContainer for \"025671f7fa668b1dd32ec44ec680fc56c388a5a4bf046a1323ce7861a8e2bf7c\" returns successfully" Oct 9 07:51:29.521927 kubelet[2521]: I1009 07:51:29.520529 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8468d56f97-2sc4p" podStartSLOduration=30.375123599 podStartE2EDuration="33.520497496s" podCreationTimestamp="2024-10-09 07:50:56 +0000 UTC" firstStartedPulling="2024-10-09 07:51:25.577022934 +0000 UTC m=+42.872265975" lastFinishedPulling="2024-10-09 07:51:28.722396808 +0000 UTC m=+46.017639872" observedRunningTime="2024-10-09 07:51:29.520336351 +0000 UTC m=+46.815579414" watchObservedRunningTime="2024-10-09 07:51:29.520497496 +0000 UTC m=+46.815740559" Oct 9 07:51:30.179764 systemd[1]: Started sshd@11-64.23.134.87:22-139.178.89.65:48918.service - OpenSSH per-connection server daemon (139.178.89.65:48918). Oct 9 07:51:30.390518 sshd[4338]: Accepted publickey for core from 139.178.89.65 port 48918 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:51:30.397402 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:51:30.413692 systemd-logind[1447]: New session 11 of user core. Oct 9 07:51:30.416421 containerd[1459]: time="2024-10-09T07:51:30.415895957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:51:30.422491 containerd[1459]: time="2024-10-09T07:51:30.420862361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:51:30.422200 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 9 07:51:30.426001 containerd[1459]: time="2024-10-09T07:51:30.423976891Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:30.440054 containerd[1459]: time="2024-10-09T07:51:30.439048932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:30.449044 containerd[1459]: time="2024-10-09T07:51:30.448972594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.726096543s"
Oct 9 07:51:30.449303 containerd[1459]: time="2024-10-09T07:51:30.449149750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Oct 9 07:51:30.459196 containerd[1459]: time="2024-10-09T07:51:30.459072030Z" level=info msg="CreateContainer within sandbox \"e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 9 07:51:30.537296 containerd[1459]: time="2024-10-09T07:51:30.537210360Z" level=info msg="CreateContainer within sandbox \"e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e031907a2f3ea70349fe8dae3fdaa5c025e0f79023fe5788ee58afc964d1a8d2\""
Oct 9 07:51:30.542918 containerd[1459]: time="2024-10-09T07:51:30.542467684Z" level=info msg="StartContainer for \"e031907a2f3ea70349fe8dae3fdaa5c025e0f79023fe5788ee58afc964d1a8d2\""
Oct 9 07:51:30.696306 systemd[1]: Started cri-containerd-e031907a2f3ea70349fe8dae3fdaa5c025e0f79023fe5788ee58afc964d1a8d2.scope - libcontainer container e031907a2f3ea70349fe8dae3fdaa5c025e0f79023fe5788ee58afc964d1a8d2.
Oct 9 07:51:30.979164 containerd[1459]: time="2024-10-09T07:51:30.978352784Z" level=info msg="StartContainer for \"e031907a2f3ea70349fe8dae3fdaa5c025e0f79023fe5788ee58afc964d1a8d2\" returns successfully"
Oct 9 07:51:30.990187 containerd[1459]: time="2024-10-09T07:51:30.988544465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Oct 9 07:51:31.132632 sshd[4338]: pam_unix(sshd:session): session closed for user core
Oct 9 07:51:31.142778 systemd[1]: sshd@11-64.23.134.87:22-139.178.89.65:48918.service: Deactivated successfully.
Oct 9 07:51:31.148179 systemd[1]: session-11.scope: Deactivated successfully.
Oct 9 07:51:31.155803 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit.
Oct 9 07:51:31.159642 systemd-logind[1447]: Removed session 11.
Oct 9 07:51:32.666172 containerd[1459]: time="2024-10-09T07:51:32.665923273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:32.668907 containerd[1459]: time="2024-10-09T07:51:32.668504109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Oct 9 07:51:32.670585 containerd[1459]: time="2024-10-09T07:51:32.670248576Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:32.678975 containerd[1459]: time="2024-10-09T07:51:32.678681746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:32.682369 containerd[1459]: time="2024-10-09T07:51:32.682303668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.693689152s"
Oct 9 07:51:32.682894 containerd[1459]: time="2024-10-09T07:51:32.682582925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Oct 9 07:51:32.686162 containerd[1459]: time="2024-10-09T07:51:32.685994984Z" level=info msg="CreateContainer within sandbox \"e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 9 07:51:32.715452 containerd[1459]: time="2024-10-09T07:51:32.715367048Z" level=info msg="CreateContainer within sandbox \"e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"724c4aa161c7a4dc12c2af2c56c0daec967a50522fc46091a23b90f40b674f14\""
Oct 9 07:51:32.717369 containerd[1459]: time="2024-10-09T07:51:32.717287202Z" level=info msg="StartContainer for \"724c4aa161c7a4dc12c2af2c56c0daec967a50522fc46091a23b90f40b674f14\""
Oct 9 07:51:32.781421 systemd[1]: Started cri-containerd-724c4aa161c7a4dc12c2af2c56c0daec967a50522fc46091a23b90f40b674f14.scope - libcontainer container 724c4aa161c7a4dc12c2af2c56c0daec967a50522fc46091a23b90f40b674f14.
Oct 9 07:51:32.876956 containerd[1459]: time="2024-10-09T07:51:32.876845128Z" level=info msg="StartContainer for \"724c4aa161c7a4dc12c2af2c56c0daec967a50522fc46091a23b90f40b674f14\" returns successfully"
Oct 9 07:51:33.228162 kubelet[2521]: I1009 07:51:33.227871 2521 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 9 07:51:33.228162 kubelet[2521]: I1009 07:51:33.228039 2521 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 9 07:51:33.590226 kubelet[2521]: I1009 07:51:33.590144 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lxz42" podStartSLOduration=31.079229646 podStartE2EDuration="37.590121214s" podCreationTimestamp="2024-10-09 07:50:56 +0000 UTC" firstStartedPulling="2024-10-09 07:51:26.17283012 +0000 UTC m=+43.468073158" lastFinishedPulling="2024-10-09 07:51:32.683721688 +0000 UTC m=+49.978964726" observedRunningTime="2024-10-09 07:51:33.586471591 +0000 UTC m=+50.881714650" watchObservedRunningTime="2024-10-09 07:51:33.590121214 +0000 UTC m=+50.885364301"
Oct 9 07:51:36.145980 systemd[1]: Started sshd@12-64.23.134.87:22-139.178.89.65:48906.service - OpenSSH per-connection server daemon (139.178.89.65:48906).
Oct 9 07:51:36.241204 sshd[4456]: Accepted publickey for core from 139.178.89.65 port 48906 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:51:36.240984 sshd[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:51:36.247926 systemd-logind[1447]: New session 12 of user core.
Oct 9 07:51:36.254272 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 9 07:51:36.500382 sshd[4456]: pam_unix(sshd:session): session closed for user core
Oct 9 07:51:36.510458 systemd[1]: sshd@12-64.23.134.87:22-139.178.89.65:48906.service: Deactivated successfully.
Oct 9 07:51:36.514424 systemd[1]: session-12.scope: Deactivated successfully.
Oct 9 07:51:36.517409 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit.
Oct 9 07:51:36.522529 systemd[1]: Started sshd@13-64.23.134.87:22-139.178.89.65:48920.service - OpenSSH per-connection server daemon (139.178.89.65:48920).
Oct 9 07:51:36.525651 systemd-logind[1447]: Removed session 12.
Oct 9 07:51:36.581658 sshd[4470]: Accepted publickey for core from 139.178.89.65 port 48920 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:51:36.584107 sshd[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:51:36.590423 systemd-logind[1447]: New session 13 of user core.
Oct 9 07:51:36.597235 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 9 07:51:36.890155 sshd[4470]: pam_unix(sshd:session): session closed for user core
Oct 9 07:51:36.911302 systemd[1]: Started sshd@14-64.23.134.87:22-139.178.89.65:48924.service - OpenSSH per-connection server daemon (139.178.89.65:48924).
Oct 9 07:51:36.912083 systemd[1]: sshd@13-64.23.134.87:22-139.178.89.65:48920.service: Deactivated successfully.
Oct 9 07:51:36.920762 systemd[1]: session-13.scope: Deactivated successfully.
Oct 9 07:51:36.923714 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit.
Oct 9 07:51:36.930196 systemd-logind[1447]: Removed session 13.
Oct 9 07:51:36.998325 sshd[4479]: Accepted publickey for core from 139.178.89.65 port 48924 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:51:37.000699 sshd[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:51:37.007496 systemd-logind[1447]: New session 14 of user core.
Oct 9 07:51:37.015220 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 9 07:51:37.231876 sshd[4479]: pam_unix(sshd:session): session closed for user core
Oct 9 07:51:37.237631 systemd[1]: sshd@14-64.23.134.87:22-139.178.89.65:48924.service: Deactivated successfully.
Oct 9 07:51:37.242446 systemd[1]: session-14.scope: Deactivated successfully.
Oct 9 07:51:37.244827 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit.
Oct 9 07:51:37.246587 systemd-logind[1447]: Removed session 14.
Oct 9 07:51:39.566132 systemd[1]: run-containerd-runc-k8s.io-90a987e4655cd578319287b647aa824586ee690423d0d105912f260d1608aa95-runc.w1NCMQ.mount: Deactivated successfully.
Oct 9 07:51:39.642742 kubelet[2521]: E1009 07:51:39.642678 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:42.254434 systemd[1]: Started sshd@15-64.23.134.87:22-139.178.89.65:48938.service - OpenSSH per-connection server daemon (139.178.89.65:48938).
Oct 9 07:51:42.312231 sshd[4529]: Accepted publickey for core from 139.178.89.65 port 48938 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:51:42.315906 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:51:42.324279 systemd-logind[1447]: New session 15 of user core.
Oct 9 07:51:42.332307 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 9 07:51:42.510782 sshd[4529]: pam_unix(sshd:session): session closed for user core
Oct 9 07:51:42.517390 systemd[1]: sshd@15-64.23.134.87:22-139.178.89.65:48938.service: Deactivated successfully.
Oct 9 07:51:42.521598 systemd[1]: session-15.scope: Deactivated successfully.
Oct 9 07:51:42.523612 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Oct 9 07:51:42.525701 systemd-logind[1447]: Removed session 15.
Oct 9 07:51:42.939075 containerd[1459]: time="2024-10-09T07:51:42.938945886Z" level=info msg="StopPodSandbox for \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\""
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.010 [WARNING][4553] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0", GenerateName:"calico-kube-controllers-8468d56f97-", Namespace:"calico-system", SelfLink:"", UID:"bd686d8f-58df-4186-8240-bba468f057cf", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8468d56f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac", Pod:"calico-kube-controllers-8468d56f97-2sc4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8063b1fa749", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.011 [INFO][4553] k8s.go 608: Cleaning up netns ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.011 [INFO][4553] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" iface="eth0" netns=""
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.011 [INFO][4553] k8s.go 615: Releasing IP address(es) ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.011 [INFO][4553] utils.go 188: Calico CNI releasing IP address ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.046 [INFO][4560] ipam_plugin.go 417: Releasing address using handleID ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0"
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.047 [INFO][4560] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.047 [INFO][4560] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.056 [WARNING][4560] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0"
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.056 [INFO][4560] ipam_plugin.go 445: Releasing address using workloadID ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0"
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.059 [INFO][4560] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:51:43.063981 containerd[1459]: 2024-10-09 07:51:43.061 [INFO][4553] k8s.go 621: Teardown processing complete. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:43.064540 containerd[1459]: time="2024-10-09T07:51:43.064041511Z" level=info msg="TearDown network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\" successfully"
Oct 9 07:51:43.064540 containerd[1459]: time="2024-10-09T07:51:43.064075272Z" level=info msg="StopPodSandbox for \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\" returns successfully"
Oct 9 07:51:43.065414 containerd[1459]: time="2024-10-09T07:51:43.065345006Z" level=info msg="RemovePodSandbox for \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\""
Oct 9 07:51:43.065414 containerd[1459]: time="2024-10-09T07:51:43.065395316Z" level=info msg="Forcibly stopping sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\""
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.146 [WARNING][4580] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0", GenerateName:"calico-kube-controllers-8468d56f97-", Namespace:"calico-system", SelfLink:"", UID:"bd686d8f-58df-4186-8240-bba468f057cf", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8468d56f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"4fb24302862b969dde951c23a9e776ce597ad96add4e7ede89372631764b43ac", Pod:"calico-kube-controllers-8468d56f97-2sc4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8063b1fa749", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.146 [INFO][4580] k8s.go 608: Cleaning up netns ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.146 [INFO][4580] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" iface="eth0" netns=""
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.146 [INFO][4580] k8s.go 615: Releasing IP address(es) ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.146 [INFO][4580] utils.go 188: Calico CNI releasing IP address ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.193 [INFO][4586] ipam_plugin.go 417: Releasing address using handleID ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0"
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.195 [INFO][4586] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.195 [INFO][4586] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.205 [WARNING][4586] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0"
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.205 [INFO][4586] ipam_plugin.go 445: Releasing address using workloadID ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" HandleID="k8s-pod-network.26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--kube--controllers--8468d56f97--2sc4p-eth0"
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.209 [INFO][4586] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:51:43.215222 containerd[1459]: 2024-10-09 07:51:43.212 [INFO][4580] k8s.go 621: Teardown processing complete. ContainerID="26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d"
Oct 9 07:51:43.215222 containerd[1459]: time="2024-10-09T07:51:43.215175396Z" level=info msg="TearDown network for sandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\" successfully"
Oct 9 07:51:43.237643 containerd[1459]: time="2024-10-09T07:51:43.237550329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 07:51:43.237863 containerd[1459]: time="2024-10-09T07:51:43.237702539Z" level=info msg="RemovePodSandbox \"26d25664fe842060cb331b64fee0eeb4a1914af62993fddaa7b73207d5ed1e0d\" returns successfully"
Oct 9 07:51:43.239087 containerd[1459]: time="2024-10-09T07:51:43.239038717Z" level=info msg="StopPodSandbox for \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\""
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.312 [WARNING][4604] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"30cb655e-3275-4ce6-b495-a8243c54033b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f", Pod:"coredns-6f6b679f8f-br874", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd735613a92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.313 [INFO][4604] k8s.go 608: Cleaning up netns ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.313 [INFO][4604] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" iface="eth0" netns=""
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.313 [INFO][4604] k8s.go 615: Releasing IP address(es) ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.313 [INFO][4604] utils.go 188: Calico CNI releasing IP address ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.350 [INFO][4610] ipam_plugin.go 417: Releasing address using handleID ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0"
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.350 [INFO][4610] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.350 [INFO][4610] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.362 [WARNING][4610] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0"
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.362 [INFO][4610] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0"
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.366 [INFO][4610] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:51:43.374186 containerd[1459]: 2024-10-09 07:51:43.369 [INFO][4604] k8s.go 621: Teardown processing complete. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:43.374186 containerd[1459]: time="2024-10-09T07:51:43.373795960Z" level=info msg="TearDown network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\" successfully"
Oct 9 07:51:43.374186 containerd[1459]: time="2024-10-09T07:51:43.373836555Z" level=info msg="StopPodSandbox for \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\" returns successfully"
Oct 9 07:51:43.376490 containerd[1459]: time="2024-10-09T07:51:43.374558536Z" level=info msg="RemovePodSandbox for \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\""
Oct 9 07:51:43.376490 containerd[1459]: time="2024-10-09T07:51:43.374603241Z" level=info msg="Forcibly stopping sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\""
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.460 [WARNING][4629] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"30cb655e-3275-4ce6-b495-a8243c54033b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"78f9ca2e1f67b0bfbeeb312fcb79db0aeecb2b38d88829c984c8f24555c0ff4f", Pod:"coredns-6f6b679f8f-br874", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd735613a92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.461 [INFO][4629] k8s.go 608: Cleaning up netns ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.461 [INFO][4629] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" iface="eth0" netns=""
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.461 [INFO][4629] k8s.go 615: Releasing IP address(es) ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.461 [INFO][4629] utils.go 188: Calico CNI releasing IP address ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.514 [INFO][4635] ipam_plugin.go 417: Releasing address using handleID ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0"
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.514 [INFO][4635] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.514 [INFO][4635] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.526 [WARNING][4635] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0"
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.526 [INFO][4635] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" HandleID="k8s-pod-network.b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--br874-eth0"
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.531 [INFO][4635] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:51:43.538657 containerd[1459]: 2024-10-09 07:51:43.534 [INFO][4629] k8s.go 621: Teardown processing complete. ContainerID="b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5"
Oct 9 07:51:43.539793 containerd[1459]: time="2024-10-09T07:51:43.539088397Z" level=info msg="TearDown network for sandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\" successfully"
Oct 9 07:51:43.547108 containerd[1459]: time="2024-10-09T07:51:43.546866796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 07:51:43.547108 containerd[1459]: time="2024-10-09T07:51:43.546985047Z" level=info msg="RemovePodSandbox \"b94e08d6d74da40177944e875e7d7e6708a6ca1ff5e5ccad3a11b9fb63fd37b5\" returns successfully"
Oct 9 07:51:43.548674 containerd[1459]: time="2024-10-09T07:51:43.547666955Z" level=info msg="StopPodSandbox for \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\""
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.693 [WARNING][4654] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc", Pod:"csi-node-driver-lxz42", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic0e436d85ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.694 [INFO][4654] k8s.go 608: Cleaning up netns ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883"
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.694 [INFO][4654] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" iface="eth0" netns=""
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.694 [INFO][4654] k8s.go 615: Releasing IP address(es) ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883"
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.694 [INFO][4654] utils.go 188: Calico CNI releasing IP address ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883"
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.738 [INFO][4660] ipam_plugin.go 417: Releasing address using handleID ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0"
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.738 [INFO][4660] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.739 [INFO][4660] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.750 [WARNING][4660] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0"
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.750 [INFO][4660] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0"
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.753 [INFO][4660] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:51:43.758730 containerd[1459]: 2024-10-09 07:51:43.755 [INFO][4654] k8s.go 621: Teardown processing complete. ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883"
Oct 9 07:51:43.760099 containerd[1459]: time="2024-10-09T07:51:43.758776191Z" level=info msg="TearDown network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\" successfully"
Oct 9 07:51:43.760099 containerd[1459]: time="2024-10-09T07:51:43.758802814Z" level=info msg="StopPodSandbox for \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\" returns successfully"
Oct 9 07:51:43.761017 containerd[1459]: time="2024-10-09T07:51:43.760545355Z" level=info msg="RemovePodSandbox for \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\""
Oct 9 07:51:43.761017 containerd[1459]: time="2024-10-09T07:51:43.760591387Z" level=info msg="Forcibly stopping sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\""
Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.839 [WARNING][4678] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8243e6c5-fffe-40ae-9ffc-3e5c0557a44d", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"e9f044612ffe49c2d87fa6816cd7eead979eb1655f45c7cfb8a359c70009d8dc", Pod:"csi-node-driver-lxz42", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic0e436d85ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.839 [INFO][4678] k8s.go 608: Cleaning up netns ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.839 [INFO][4678] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" iface="eth0" netns="" Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.839 [INFO][4678] k8s.go 615: Releasing IP address(es) ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.839 [INFO][4678] utils.go 188: Calico CNI releasing IP address ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.880 [INFO][4684] ipam_plugin.go 417: Releasing address using handleID ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.881 [INFO][4684] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.881 [INFO][4684] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.890 [WARNING][4684] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.890 [INFO][4684] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" HandleID="k8s-pod-network.ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Workload="ci--4081.1.0--6--a1de16b848-k8s-csi--node--driver--lxz42-eth0" Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.893 [INFO][4684] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:43.900232 containerd[1459]: 2024-10-09 07:51:43.895 [INFO][4678] k8s.go 621: Teardown processing complete. ContainerID="ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883" Oct 9 07:51:43.901560 containerd[1459]: time="2024-10-09T07:51:43.900910966Z" level=info msg="TearDown network for sandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\" successfully" Oct 9 07:51:43.907611 containerd[1459]: time="2024-10-09T07:51:43.907172325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:51:43.907611 containerd[1459]: time="2024-10-09T07:51:43.907528225Z" level=info msg="RemovePodSandbox \"ccc48ae28a7e5f2b117790cd70dc01754a0d3e1e1b5d220e732ef26ac559a883\" returns successfully" Oct 9 07:51:43.909426 containerd[1459]: time="2024-10-09T07:51:43.908859608Z" level=info msg="StopPodSandbox for \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\"" Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:43.982 [WARNING][4703] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0ba2cf0b-92bf-40e7-afd1-29479bd52c4d", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507", Pod:"coredns-6f6b679f8f-56sbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali908f02abf77", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:43.983 [INFO][4703] k8s.go 608: Cleaning up netns ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:43.983 [INFO][4703] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" iface="eth0" netns="" Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:43.983 [INFO][4703] k8s.go 615: Releasing IP address(es) ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:43.983 [INFO][4703] utils.go 188: Calico CNI releasing IP address ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:44.030 [INFO][4709] ipam_plugin.go 417: Releasing address using handleID ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:44.030 [INFO][4709] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:44.030 [INFO][4709] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:44.040 [WARNING][4709] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:44.040 [INFO][4709] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:44.044 [INFO][4709] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:44.050440 containerd[1459]: 2024-10-09 07:51:44.046 [INFO][4703] k8s.go 621: Teardown processing complete. 
ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:44.053054 containerd[1459]: time="2024-10-09T07:51:44.051810198Z" level=info msg="TearDown network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\" successfully" Oct 9 07:51:44.053054 containerd[1459]: time="2024-10-09T07:51:44.051855312Z" level=info msg="StopPodSandbox for \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\" returns successfully" Oct 9 07:51:44.053054 containerd[1459]: time="2024-10-09T07:51:44.052665161Z" level=info msg="RemovePodSandbox for \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\"" Oct 9 07:51:44.053054 containerd[1459]: time="2024-10-09T07:51:44.052699897Z" level=info msg="Forcibly stopping sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\"" Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.117 [WARNING][4727] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0ba2cf0b-92bf-40e7-afd1-29479bd52c4d", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"0aaf8ee2d41dd8e636887c2b0d04ab7176313eab69ccbf660f176e54eccd8507", Pod:"coredns-6f6b679f8f-56sbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali908f02abf77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.118 [INFO][4727] k8s.go 608: 
Cleaning up netns ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.118 [INFO][4727] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" iface="eth0" netns="" Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.118 [INFO][4727] k8s.go 615: Releasing IP address(es) ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.118 [INFO][4727] utils.go 188: Calico CNI releasing IP address ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.150 [INFO][4733] ipam_plugin.go 417: Releasing address using handleID ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.150 [INFO][4733] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.150 [INFO][4733] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.159 [WARNING][4733] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.159 [INFO][4733] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" HandleID="k8s-pod-network.fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Workload="ci--4081.1.0--6--a1de16b848-k8s-coredns--6f6b679f8f--56sbq-eth0" Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.162 [INFO][4733] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:44.168042 containerd[1459]: 2024-10-09 07:51:44.165 [INFO][4727] k8s.go 621: Teardown processing complete. ContainerID="fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208" Oct 9 07:51:44.168042 containerd[1459]: time="2024-10-09T07:51:44.167829332Z" level=info msg="TearDown network for sandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\" successfully" Oct 9 07:51:44.172969 containerd[1459]: time="2024-10-09T07:51:44.172898162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:51:44.173104 containerd[1459]: time="2024-10-09T07:51:44.173062339Z" level=info msg="RemovePodSandbox \"fac05bb684250bae6346a07d5eab8947b923b1efd02279b4c25e48bfa25d7208\" returns successfully" Oct 9 07:51:47.536465 systemd[1]: Started sshd@16-64.23.134.87:22-139.178.89.65:42216.service - OpenSSH per-connection server daemon (139.178.89.65:42216). 
Oct 9 07:51:47.670277 sshd[4742]: Accepted publickey for core from 139.178.89.65 port 42216 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:51:47.675590 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:51:47.684858 systemd-logind[1447]: New session 16 of user core. Oct 9 07:51:47.694342 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:51:47.962037 sshd[4742]: pam_unix(sshd:session): session closed for user core Oct 9 07:51:47.971162 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:51:47.976242 systemd[1]: sshd@16-64.23.134.87:22-139.178.89.65:42216.service: Deactivated successfully. Oct 9 07:51:47.982011 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:51:47.985871 systemd-logind[1447]: Removed session 16. Oct 9 07:51:50.975358 systemd[1]: Created slice kubepods-besteffort-podf4195dd8_28ea_41eb_ab10_09c0620cd19a.slice - libcontainer container kubepods-besteffort-podf4195dd8_28ea_41eb_ab10_09c0620cd19a.slice. 
Oct 9 07:51:51.086072 kubelet[2521]: I1009 07:51:51.085575 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snnbg\" (UniqueName: \"kubernetes.io/projected/f4195dd8-28ea-41eb-ab10-09c0620cd19a-kube-api-access-snnbg\") pod \"calico-apiserver-6b499549f-c8fgq\" (UID: \"f4195dd8-28ea-41eb-ab10-09c0620cd19a\") " pod="calico-apiserver/calico-apiserver-6b499549f-c8fgq" Oct 9 07:51:51.086072 kubelet[2521]: I1009 07:51:51.085921 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4195dd8-28ea-41eb-ab10-09c0620cd19a-calico-apiserver-certs\") pod \"calico-apiserver-6b499549f-c8fgq\" (UID: \"f4195dd8-28ea-41eb-ab10-09c0620cd19a\") " pod="calico-apiserver/calico-apiserver-6b499549f-c8fgq" Oct 9 07:51:51.189461 kubelet[2521]: E1009 07:51:51.189365 2521 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:51:51.193214 kubelet[2521]: E1009 07:51:51.192236 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4195dd8-28ea-41eb-ab10-09c0620cd19a-calico-apiserver-certs podName:f4195dd8-28ea-41eb-ab10-09c0620cd19a nodeName:}" failed. No retries permitted until 2024-10-09 07:51:51.689521754 +0000 UTC m=+68.984764818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f4195dd8-28ea-41eb-ab10-09c0620cd19a-calico-apiserver-certs") pod "calico-apiserver-6b499549f-c8fgq" (UID: "f4195dd8-28ea-41eb-ab10-09c0620cd19a") : secret "calico-apiserver-certs" not found Oct 9 07:51:51.890676 containerd[1459]: time="2024-10-09T07:51:51.889669206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b499549f-c8fgq,Uid:f4195dd8-28ea-41eb-ab10-09c0620cd19a,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:51:52.141121 systemd-networkd[1358]: calif50167ef809: Link UP Oct 9 07:51:52.142319 systemd-networkd[1358]: calif50167ef809: Gained carrier Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:51.994 [INFO][4767] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0 calico-apiserver-6b499549f- calico-apiserver f4195dd8-28ea-41eb-ab10-09c0620cd19a 1011 0 2024-10-09 07:51:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b499549f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.1.0-6-a1de16b848 calico-apiserver-6b499549f-c8fgq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif50167ef809 [] []}} ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Namespace="calico-apiserver" Pod="calico-apiserver-6b499549f-c8fgq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:51.995 [INFO][4767] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Namespace="calico-apiserver" Pod="calico-apiserver-6b499549f-c8fgq" 
WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.052 [INFO][4778] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" HandleID="k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.070 [INFO][4778] ipam_plugin.go 270: Auto assigning IP ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" HandleID="k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050d50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.1.0-6-a1de16b848", "pod":"calico-apiserver-6b499549f-c8fgq", "timestamp":"2024-10-09 07:51:52.052416986 +0000 UTC"}, Hostname:"ci-4081.1.0-6-a1de16b848", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.071 [INFO][4778] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.071 [INFO][4778] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.071 [INFO][4778] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-6-a1de16b848' Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.074 [INFO][4778] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.083 [INFO][4778] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.093 [INFO][4778] ipam.go 489: Trying affinity for 192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.097 [INFO][4778] ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.101 [INFO][4778] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.101 [INFO][4778] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.104 [INFO][4778] ipam.go 1685: Creating new handle: k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3 Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.112 [INFO][4778] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.123 [INFO][4778] ipam.go 1216: Successfully claimed IPs: [192.168.77.197/26] 
block=192.168.77.192/26 handle="k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.124 [INFO][4778] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.197/26] handle="k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" host="ci-4081.1.0-6-a1de16b848" Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.124 [INFO][4778] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:51:52.180122 containerd[1459]: 2024-10-09 07:51:52.124 [INFO][4778] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.197/26] IPv6=[] ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" HandleID="k8s-pod-network.2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Workload="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" Oct 9 07:51:52.184997 containerd[1459]: 2024-10-09 07:51:52.128 [INFO][4767] k8s.go 386: Populated endpoint ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Namespace="calico-apiserver" Pod="calico-apiserver-6b499549f-c8fgq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0", GenerateName:"calico-apiserver-6b499549f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4195dd8-28ea-41eb-ab10-09c0620cd19a", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b499549f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"", Pod:"calico-apiserver-6b499549f-c8fgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif50167ef809", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:52.184997 containerd[1459]: 2024-10-09 07:51:52.128 [INFO][4767] k8s.go 387: Calico CNI using IPs: [192.168.77.197/32] ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Namespace="calico-apiserver" Pod="calico-apiserver-6b499549f-c8fgq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" Oct 9 07:51:52.184997 containerd[1459]: 2024-10-09 07:51:52.129 [INFO][4767] dataplane_linux.go 68: Setting the host side veth name to calif50167ef809 ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Namespace="calico-apiserver" Pod="calico-apiserver-6b499549f-c8fgq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" Oct 9 07:51:52.184997 containerd[1459]: 2024-10-09 07:51:52.137 [INFO][4767] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Namespace="calico-apiserver" Pod="calico-apiserver-6b499549f-c8fgq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" Oct 9 07:51:52.184997 containerd[1459]: 2024-10-09 07:51:52.144 
[INFO][4767] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Namespace="calico-apiserver" Pod="calico-apiserver-6b499549f-c8fgq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0", GenerateName:"calico-apiserver-6b499549f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4195dd8-28ea-41eb-ab10-09c0620cd19a", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b499549f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-6-a1de16b848", ContainerID:"2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3", Pod:"calico-apiserver-6b499549f-c8fgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif50167ef809", MAC:"76:4d:84:d5:de:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:51:52.184997 containerd[1459]: 2024-10-09 07:51:52.173 [INFO][4767] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3" Namespace="calico-apiserver" Pod="calico-apiserver-6b499549f-c8fgq" WorkloadEndpoint="ci--4081.1.0--6--a1de16b848-k8s-calico--apiserver--6b499549f--c8fgq-eth0" Oct 9 07:51:52.244386 containerd[1459]: time="2024-10-09T07:51:52.244189738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:51:52.244386 containerd[1459]: time="2024-10-09T07:51:52.244276207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:51:52.244386 containerd[1459]: time="2024-10-09T07:51:52.244290177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:52.244996 containerd[1459]: time="2024-10-09T07:51:52.244481719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:51:52.306264 systemd[1]: run-containerd-runc-k8s.io-2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3-runc.laHFjy.mount: Deactivated successfully. Oct 9 07:51:52.320717 systemd[1]: Started cri-containerd-2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3.scope - libcontainer container 2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3. 
Oct 9 07:51:52.462149 containerd[1459]: time="2024-10-09T07:51:52.461543770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b499549f-c8fgq,Uid:f4195dd8-28ea-41eb-ab10-09c0620cd19a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3\""
Oct 9 07:51:52.470358 containerd[1459]: time="2024-10-09T07:51:52.470023480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 07:51:52.988284 systemd[1]: Started sshd@17-64.23.134.87:22-139.178.89.65:42218.service - OpenSSH per-connection server daemon (139.178.89.65:42218).
Oct 9 07:51:53.103682 sshd[4841]: Accepted publickey for core from 139.178.89.65 port 42218 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:51:53.106267 sshd[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:51:53.122413 systemd-logind[1447]: New session 17 of user core.
Oct 9 07:51:53.127206 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 9 07:51:53.679245 sshd[4841]: pam_unix(sshd:session): session closed for user core
Oct 9 07:51:53.692551 systemd[1]: sshd@17-64.23.134.87:22-139.178.89.65:42218.service: Deactivated successfully.
Oct 9 07:51:53.699167 systemd[1]: session-17.scope: Deactivated successfully.
Oct 9 07:51:53.709024 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit.
Oct 9 07:51:53.713947 systemd-logind[1447]: Removed session 17.
Oct 9 07:51:53.802120 systemd-networkd[1358]: calif50167ef809: Gained IPv6LL
Oct 9 07:51:55.301843 containerd[1459]: time="2024-10-09T07:51:55.301777486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:55.305577 containerd[1459]: time="2024-10-09T07:51:55.305512571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 07:51:55.307353 containerd[1459]: time="2024-10-09T07:51:55.307283457Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:55.312201 containerd[1459]: time="2024-10-09T07:51:55.312142448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:51:55.314584 containerd[1459]: time="2024-10-09T07:51:55.314530954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.844455726s"
Oct 9 07:51:55.315833 containerd[1459]: time="2024-10-09T07:51:55.314820206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 07:51:55.320832 containerd[1459]: time="2024-10-09T07:51:55.320574259Z" level=info msg="CreateContainer within sandbox \"2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 07:51:55.351452 containerd[1459]: time="2024-10-09T07:51:55.351056735Z" level=info msg="CreateContainer within sandbox \"2e5f0d48ab7993c45123bd7717bceefd6a27c1e147da136054ff576fa831eca3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1520e3c739bad9ef3cb91e3454350f2b7060678b7dea28ec04efe8a488aedfd\""
Oct 9 07:51:55.352754 containerd[1459]: time="2024-10-09T07:51:55.352630585Z" level=info msg="StartContainer for \"f1520e3c739bad9ef3cb91e3454350f2b7060678b7dea28ec04efe8a488aedfd\""
Oct 9 07:51:55.430299 systemd[1]: Started cri-containerd-f1520e3c739bad9ef3cb91e3454350f2b7060678b7dea28ec04efe8a488aedfd.scope - libcontainer container f1520e3c739bad9ef3cb91e3454350f2b7060678b7dea28ec04efe8a488aedfd.
Oct 9 07:51:55.560567 containerd[1459]: time="2024-10-09T07:51:55.559359648Z" level=info msg="StartContainer for \"f1520e3c739bad9ef3cb91e3454350f2b7060678b7dea28ec04efe8a488aedfd\" returns successfully"
Oct 9 07:51:55.733922 kubelet[2521]: I1009 07:51:55.733800 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b499549f-c8fgq" podStartSLOduration=2.884599012 podStartE2EDuration="5.733770497s" podCreationTimestamp="2024-10-09 07:51:50 +0000 UTC" firstStartedPulling="2024-10-09 07:51:52.468108621 +0000 UTC m=+69.763351660" lastFinishedPulling="2024-10-09 07:51:55.317280107 +0000 UTC m=+72.612523145" observedRunningTime="2024-10-09 07:51:55.733524178 +0000 UTC m=+73.028767240" watchObservedRunningTime="2024-10-09 07:51:55.733770497 +0000 UTC m=+73.029013561"
Oct 9 07:51:55.984200 kubelet[2521]: E1009 07:51:55.982593 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:57.983947 kubelet[2521]: E1009 07:51:57.983113 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:51:58.565824 sshd[4279]: kex_exchange_identification: read: Connection reset by peer
Oct 9 07:51:58.565824 sshd[4279]: Connection reset by 60.191.20.210 port 23456
Oct 9 07:51:58.567099 systemd[1]: sshd@10-64.23.134.87:22-60.191.20.210:23456.service: Deactivated successfully.
Oct 9 07:51:58.697427 systemd[1]: Started sshd@18-64.23.134.87:22-139.178.89.65:39078.service - OpenSSH per-connection server daemon (139.178.89.65:39078).
Oct 9 07:51:58.820705 sshd[4952]: Accepted publickey for core from 139.178.89.65 port 39078 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:51:58.823832 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:51:58.830659 systemd-logind[1447]: New session 18 of user core.
Oct 9 07:51:58.840229 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 9 07:51:59.454135 sshd[4952]: pam_unix(sshd:session): session closed for user core
Oct 9 07:51:59.469907 systemd[1]: sshd@18-64.23.134.87:22-139.178.89.65:39078.service: Deactivated successfully.
Oct 9 07:51:59.475353 systemd[1]: session-18.scope: Deactivated successfully.
Oct 9 07:51:59.480961 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit.
Oct 9 07:51:59.490576 systemd[1]: Started sshd@19-64.23.134.87:22-139.178.89.65:39094.service - OpenSSH per-connection server daemon (139.178.89.65:39094).
Oct 9 07:51:59.497004 systemd-logind[1447]: Removed session 18.
Oct 9 07:51:59.568476 sshd[4966]: Accepted publickey for core from 139.178.89.65 port 39094 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:51:59.571368 sshd[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:51:59.584090 systemd-logind[1447]: New session 19 of user core.
Oct 9 07:51:59.589224 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 9 07:52:00.150264 sshd[4966]: pam_unix(sshd:session): session closed for user core
Oct 9 07:52:00.180062 systemd[1]: Started sshd@20-64.23.134.87:22-139.178.89.65:39104.service - OpenSSH per-connection server daemon (139.178.89.65:39104).
Oct 9 07:52:00.184030 systemd[1]: sshd@19-64.23.134.87:22-139.178.89.65:39094.service: Deactivated successfully.
Oct 9 07:52:00.197502 systemd[1]: session-19.scope: Deactivated successfully.
Oct 9 07:52:00.211273 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Oct 9 07:52:00.224316 systemd-logind[1447]: Removed session 19.
Oct 9 07:52:00.316955 sshd[4980]: Accepted publickey for core from 139.178.89.65 port 39104 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:52:00.319797 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:52:00.330298 systemd-logind[1447]: New session 20 of user core.
Oct 9 07:52:00.337245 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 9 07:52:01.983325 kubelet[2521]: E1009 07:52:01.983251 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:52:04.137444 sshd[4980]: pam_unix(sshd:session): session closed for user core
Oct 9 07:52:04.155851 systemd[1]: sshd@20-64.23.134.87:22-139.178.89.65:39104.service: Deactivated successfully.
Oct 9 07:52:04.161639 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 07:52:04.165219 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Oct 9 07:52:04.173405 systemd-logind[1447]: Removed session 20.
Oct 9 07:52:04.182733 systemd[1]: Started sshd@21-64.23.134.87:22-139.178.89.65:39118.service - OpenSSH per-connection server daemon (139.178.89.65:39118).
Oct 9 07:52:04.279812 sshd[5011]: Accepted publickey for core from 139.178.89.65 port 39118 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:52:04.281404 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:52:04.291870 systemd-logind[1447]: New session 21 of user core.
Oct 9 07:52:04.297348 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 9 07:52:05.519627 sshd[5011]: pam_unix(sshd:session): session closed for user core
Oct 9 07:52:05.533404 systemd[1]: sshd@21-64.23.134.87:22-139.178.89.65:39118.service: Deactivated successfully.
Oct 9 07:52:05.541246 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 07:52:05.544089 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Oct 9 07:52:05.561641 systemd[1]: Started sshd@22-64.23.134.87:22-139.178.89.65:48106.service - OpenSSH per-connection server daemon (139.178.89.65:48106).
Oct 9 07:52:05.566719 systemd-logind[1447]: Removed session 21.
Oct 9 07:52:05.683136 sshd[5023]: Accepted publickey for core from 139.178.89.65 port 48106 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:52:05.685021 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:52:05.697218 systemd-logind[1447]: New session 22 of user core.
Oct 9 07:52:05.703577 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 9 07:52:05.931303 sshd[5023]: pam_unix(sshd:session): session closed for user core
Oct 9 07:52:05.936164 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Oct 9 07:52:05.937432 systemd[1]: sshd@22-64.23.134.87:22-139.178.89.65:48106.service: Deactivated successfully.
Oct 9 07:52:05.942798 systemd[1]: session-22.scope: Deactivated successfully.
Oct 9 07:52:05.946471 systemd-logind[1447]: Removed session 22.
Oct 9 07:52:10.942708 systemd[1]: Started sshd@23-64.23.134.87:22-139.178.89.65:48108.service - OpenSSH per-connection server daemon (139.178.89.65:48108).
Oct 9 07:52:11.027767 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 48108 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:52:11.030099 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:52:11.038825 systemd-logind[1447]: New session 23 of user core.
Oct 9 07:52:11.050173 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 07:52:11.265450 sshd[5058]: pam_unix(sshd:session): session closed for user core
Oct 9 07:52:11.273780 systemd[1]: sshd@23-64.23.134.87:22-139.178.89.65:48108.service: Deactivated successfully.
Oct 9 07:52:11.280061 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 07:52:11.281790 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Oct 9 07:52:11.284914 systemd-logind[1447]: Removed session 23.
Oct 9 07:52:14.983328 kubelet[2521]: E1009 07:52:14.982913 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:52:16.289328 systemd[1]: Started sshd@24-64.23.134.87:22-139.178.89.65:54386.service - OpenSSH per-connection server daemon (139.178.89.65:54386).
Oct 9 07:52:16.353762 sshd[5081]: Accepted publickey for core from 139.178.89.65 port 54386 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:52:16.357001 sshd[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:52:16.368865 systemd-logind[1447]: New session 24 of user core.
Oct 9 07:52:16.376238 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 07:52:16.642988 sshd[5081]: pam_unix(sshd:session): session closed for user core
Oct 9 07:52:16.649496 systemd[1]: sshd@24-64.23.134.87:22-139.178.89.65:54386.service: Deactivated successfully.
Oct 9 07:52:16.653021 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 07:52:16.654086 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Oct 9 07:52:16.656453 systemd-logind[1447]: Removed session 24.
Oct 9 07:52:21.666459 systemd[1]: Started sshd@25-64.23.134.87:22-139.178.89.65:54402.service - OpenSSH per-connection server daemon (139.178.89.65:54402).
Oct 9 07:52:21.784415 sshd[5096]: Accepted publickey for core from 139.178.89.65 port 54402 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:52:21.785523 sshd[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:52:21.794243 systemd-logind[1447]: New session 25 of user core.
Oct 9 07:52:21.798464 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 07:52:22.150132 sshd[5096]: pam_unix(sshd:session): session closed for user core
Oct 9 07:52:22.157853 systemd[1]: sshd@25-64.23.134.87:22-139.178.89.65:54402.service: Deactivated successfully.
Oct 9 07:52:22.161849 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 07:52:22.163689 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit.
Oct 9 07:52:22.165715 systemd-logind[1447]: Removed session 25.
Oct 9 07:52:27.169520 systemd[1]: Started sshd@26-64.23.134.87:22-139.178.89.65:47554.service - OpenSSH per-connection server daemon (139.178.89.65:47554).
Oct 9 07:52:27.220770 sshd[5133]: Accepted publickey for core from 139.178.89.65 port 47554 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:52:27.223486 sshd[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:52:27.232948 systemd-logind[1447]: New session 26 of user core.
Oct 9 07:52:27.241239 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 07:52:27.459123 sshd[5133]: pam_unix(sshd:session): session closed for user core
Oct 9 07:52:27.466166 systemd[1]: sshd@26-64.23.134.87:22-139.178.89.65:47554.service: Deactivated successfully.
Oct 9 07:52:27.470375 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 07:52:27.471497 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit.
Oct 9 07:52:27.473232 systemd-logind[1447]: Removed session 26.