Dec 13 08:53:28.976254 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 08:53:28.976284 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:53:28.976299 kernel: BIOS-provided physical RAM map:
Dec 13 08:53:28.976307 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 08:53:28.976313 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 08:53:28.976319 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 08:53:28.976327 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Dec 13 08:53:28.976333 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Dec 13 08:53:28.976340 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 08:53:28.976348 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 08:53:28.976355 kernel: NX (Execute Disable) protection: active
Dec 13 08:53:28.976362 kernel: APIC: Static calls initialized
Dec 13 08:53:28.976368 kernel: SMBIOS 2.8 present.
Dec 13 08:53:28.976375 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 13 08:53:28.976383 kernel: Hypervisor detected: KVM
Dec 13 08:53:28.976392 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 08:53:28.976399 kernel: kvm-clock: using sched offset of 3228102349 cycles
Dec 13 08:53:28.976407 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 08:53:28.976414 kernel: tsc: Detected 1999.999 MHz processor
Dec 13 08:53:28.976421 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 08:53:28.976429 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 08:53:28.976436 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Dec 13 08:53:28.976443 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 08:53:28.976450 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 08:53:28.976460 kernel: ACPI: Early table checksum verification disabled
Dec 13 08:53:28.976467 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Dec 13 08:53:28.976474 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:53:28.976481 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:53:28.976488 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:53:28.976495 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 08:53:28.976501 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:53:28.976508 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:53:28.976515 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:53:28.976525 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:53:28.976531 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Dec 13 08:53:28.976538 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Dec 13 08:53:28.976545 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 08:53:28.976552 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Dec 13 08:53:28.976559 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Dec 13 08:53:28.976566 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Dec 13 08:53:28.976579 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Dec 13 08:53:28.976587 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 08:53:28.976594 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 08:53:28.976601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 08:53:28.976609 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 08:53:28.976616 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Dec 13 08:53:28.976624 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Dec 13 08:53:28.976634 kernel: Zone ranges:
Dec 13 08:53:28.976641 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 08:53:28.976648 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Dec 13 08:53:28.976655 kernel: Normal empty
Dec 13 08:53:28.976663 kernel: Movable zone start for each node
Dec 13 08:53:28.976670 kernel: Early memory node ranges
Dec 13 08:53:28.976677 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 08:53:28.976684 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Dec 13 08:53:28.976691 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Dec 13 08:53:28.976701 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 08:53:28.976708 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 08:53:28.976716 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Dec 13 08:53:28.976723 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 08:53:28.976730 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 08:53:28.976738 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 08:53:28.976745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 08:53:28.976752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 08:53:28.976760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 08:53:28.976769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 08:53:28.976776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 08:53:28.976788 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 08:53:28.976799 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 08:53:28.976811 kernel: TSC deadline timer available
Dec 13 08:53:28.976821 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 08:53:28.976833 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 08:53:28.976845 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 08:53:28.976858 kernel: Booting paravirtualized kernel on KVM
Dec 13 08:53:28.976875 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 08:53:28.976887 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 08:53:28.976899 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 08:53:28.976912 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 08:53:28.976923 kernel: pcpu-alloc: [0] 0 1
Dec 13 08:53:28.976932 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 08:53:28.976940 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:53:28.976949 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 08:53:28.976959 kernel: random: crng init done
Dec 13 08:53:28.976966 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 08:53:28.976974 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 08:53:28.976981 kernel: Fallback order for Node 0: 0
Dec 13 08:53:28.976988 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Dec 13 08:53:28.976995 kernel: Policy zone: DMA32
Dec 13 08:53:28.977002 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 08:53:28.977010 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 08:53:28.977017 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 08:53:28.977030 kernel: Kernel/User page tables isolation: enabled
Dec 13 08:53:28.977040 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 08:53:28.977052 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 08:53:28.977059 kernel: Dynamic Preempt: voluntary
Dec 13 08:53:28.977066 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 08:53:28.977075 kernel: rcu: RCU event tracing is enabled.
Dec 13 08:53:28.977082 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 08:53:28.977090 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 08:53:28.977097 kernel: Rude variant of Tasks RCU enabled.
Dec 13 08:53:28.977107 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 08:53:28.977115 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 08:53:28.977122 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 08:53:28.977129 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 08:53:28.977137 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 08:53:28.977144 kernel: Console: colour VGA+ 80x25
Dec 13 08:53:28.977171 kernel: printk: console [tty0] enabled
Dec 13 08:53:28.977178 kernel: printk: console [ttyS0] enabled
Dec 13 08:53:28.977185 kernel: ACPI: Core revision 20230628
Dec 13 08:53:28.977193 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 08:53:28.977203 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 08:53:28.977210 kernel: x2apic enabled
Dec 13 08:53:28.977217 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 08:53:28.977225 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 08:53:28.977232 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Dec 13 08:53:28.977240 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Dec 13 08:53:28.977247 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 08:53:28.977255 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 08:53:28.977273 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 08:53:28.977296 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 08:53:28.977309 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 08:53:28.977321 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 08:53:28.977329 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 08:53:28.977337 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 08:53:28.977345 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 08:53:28.977353 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 08:53:28.977361 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 08:53:28.977372 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 08:53:28.977380 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 08:53:28.977388 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 08:53:28.977396 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 08:53:28.977404 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 08:53:28.977411 kernel: Freeing SMP alternatives memory: 32K
Dec 13 08:53:28.977419 kernel: pid_max: default: 32768 minimum: 301
Dec 13 08:53:28.977427 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 08:53:28.977438 kernel: landlock: Up and running.
Dec 13 08:53:28.977451 kernel: SELinux: Initializing.
Dec 13 08:53:28.977462 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 08:53:28.977474 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 08:53:28.977486 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 13 08:53:28.977497 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:53:28.977509 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:53:28.977520 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:53:28.977536 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 13 08:53:28.977549 kernel: signal: max sigframe size: 1776
Dec 13 08:53:28.977558 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 08:53:28.977566 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 08:53:28.977574 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 08:53:28.977582 kernel: smp: Bringing up secondary CPUs ...
Dec 13 08:53:28.977590 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 08:53:28.977598 kernel: .... node #0, CPUs: #1
Dec 13 08:53:28.977606 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 08:53:28.977617 kernel: smpboot: Max logical packages: 1
Dec 13 08:53:28.977625 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Dec 13 08:53:28.977633 kernel: devtmpfs: initialized
Dec 13 08:53:28.977641 kernel: x86/mm: Memory block size: 128MB
Dec 13 08:53:28.977649 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 08:53:28.977657 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 08:53:28.977665 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 08:53:28.977673 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 08:53:28.977681 kernel: audit: initializing netlink subsys (disabled)
Dec 13 08:53:28.977689 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 08:53:28.977699 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 08:53:28.977707 kernel: audit: type=2000 audit(1734080007.702:1): state=initialized audit_enabled=0 res=1
Dec 13 08:53:28.977715 kernel: cpuidle: using governor menu
Dec 13 08:53:28.977723 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 08:53:28.977731 kernel: dca service started, version 1.12.1
Dec 13 08:53:28.977739 kernel: PCI: Using configuration type 1 for base access
Dec 13 08:53:28.977748 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 08:53:28.977756 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 08:53:28.977766 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 08:53:28.977774 kernel: ACPI: Added _OSI(Module Device)
Dec 13 08:53:28.977782 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 08:53:28.977790 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 08:53:28.977798 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 08:53:28.977806 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 08:53:28.977814 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 08:53:28.977821 kernel: ACPI: Interpreter enabled
Dec 13 08:53:28.977830 kernel: ACPI: PM: (supports S0 S5)
Dec 13 08:53:28.977838 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 08:53:28.977848 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 08:53:28.977856 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 08:53:28.977864 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 08:53:28.977872 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 08:53:28.980296 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 08:53:28.980462 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 08:53:28.980562 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 08:53:28.980579 kernel: acpiphp: Slot [3] registered
Dec 13 08:53:28.980589 kernel: acpiphp: Slot [4] registered
Dec 13 08:53:28.980597 kernel: acpiphp: Slot [5] registered
Dec 13 08:53:28.980605 kernel: acpiphp: Slot [6] registered
Dec 13 08:53:28.980613 kernel: acpiphp: Slot [7] registered
Dec 13 08:53:28.980622 kernel: acpiphp: Slot [8] registered
Dec 13 08:53:28.980630 kernel: acpiphp: Slot [9] registered
Dec 13 08:53:28.980638 kernel: acpiphp: Slot [10] registered
Dec 13 08:53:28.980646 kernel: acpiphp: Slot [11] registered
Dec 13 08:53:28.980658 kernel: acpiphp: Slot [12] registered
Dec 13 08:53:28.980665 kernel: acpiphp: Slot [13] registered
Dec 13 08:53:28.980673 kernel: acpiphp: Slot [14] registered
Dec 13 08:53:28.980682 kernel: acpiphp: Slot [15] registered
Dec 13 08:53:28.980691 kernel: acpiphp: Slot [16] registered
Dec 13 08:53:28.980704 kernel: acpiphp: Slot [17] registered
Dec 13 08:53:28.980715 kernel: acpiphp: Slot [18] registered
Dec 13 08:53:28.980727 kernel: acpiphp: Slot [19] registered
Dec 13 08:53:28.980738 kernel: acpiphp: Slot [20] registered
Dec 13 08:53:28.980749 kernel: acpiphp: Slot [21] registered
Dec 13 08:53:28.980766 kernel: acpiphp: Slot [22] registered
Dec 13 08:53:28.980778 kernel: acpiphp: Slot [23] registered
Dec 13 08:53:28.980789 kernel: acpiphp: Slot [24] registered
Dec 13 08:53:28.980813 kernel: acpiphp: Slot [25] registered
Dec 13 08:53:28.980825 kernel: acpiphp: Slot [26] registered
Dec 13 08:53:28.980837 kernel: acpiphp: Slot [27] registered
Dec 13 08:53:28.980849 kernel: acpiphp: Slot [28] registered
Dec 13 08:53:28.980861 kernel: acpiphp: Slot [29] registered
Dec 13 08:53:28.980873 kernel: acpiphp: Slot [30] registered
Dec 13 08:53:28.980890 kernel: acpiphp: Slot [31] registered
Dec 13 08:53:28.980902 kernel: PCI host bridge to bus 0000:00
Dec 13 08:53:28.981060 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 08:53:28.981172 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 08:53:28.981259 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 08:53:28.981376 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 08:53:28.981464 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 08:53:28.981554 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 08:53:28.981682 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 08:53:28.981798 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 08:53:28.981906 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 08:53:28.982003 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Dec 13 08:53:28.982128 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 08:53:28.984335 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 08:53:28.984484 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 08:53:28.984630 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 08:53:28.984763 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 13 08:53:28.984857 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Dec 13 08:53:28.984965 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 08:53:28.985065 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 08:53:28.987241 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 08:53:28.987404 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 08:53:28.987505 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 08:53:28.987612 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 08:53:28.987735 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Dec 13 08:53:28.987828 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 08:53:28.987945 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 08:53:28.988083 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 08:53:28.988266 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Dec 13 08:53:28.988371 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Dec 13 08:53:28.988468 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 08:53:28.988571 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 08:53:28.988664 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Dec 13 08:53:28.988787 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Dec 13 08:53:28.988882 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 08:53:28.989003 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Dec 13 08:53:28.989099 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Dec 13 08:53:28.989205 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Dec 13 08:53:28.989318 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 08:53:28.989468 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Dec 13 08:53:28.989573 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 08:53:28.989686 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Dec 13 08:53:28.989778 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 08:53:28.989888 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Dec 13 08:53:28.989983 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Dec 13 08:53:28.990075 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Dec 13 08:53:28.992302 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 13 08:53:28.992457 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 08:53:28.992563 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Dec 13 08:53:28.992655 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 13 08:53:28.992666 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 08:53:28.992675 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 08:53:28.992683 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 08:53:28.992692 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 08:53:28.992703 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 08:53:28.992711 kernel: iommu: Default domain type: Translated
Dec 13 08:53:28.992719 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 08:53:28.992728 kernel: PCI: Using ACPI for IRQ routing
Dec 13 08:53:28.992736 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 08:53:28.992745 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 08:53:28.992753 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Dec 13 08:53:28.992846 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 08:53:28.992944 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 08:53:28.993041 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 08:53:28.993052 kernel: vgaarb: loaded
Dec 13 08:53:28.993060 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 08:53:28.993069 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 08:53:28.993077 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 08:53:28.993085 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 08:53:28.993094 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 08:53:28.993102 kernel: pnp: PnP ACPI init
Dec 13 08:53:28.993110 kernel: pnp: PnP ACPI: found 4 devices
Dec 13 08:53:28.993122 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 08:53:28.993138 kernel: NET: Registered PF_INET protocol family
Dec 13 08:53:28.993165 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 08:53:28.993179 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 08:53:28.993188 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 08:53:28.993196 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 08:53:28.993205 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 08:53:28.993214 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 08:53:28.993228 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 08:53:28.993240 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 08:53:28.993248 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 08:53:28.993256 kernel: NET: Registered PF_XDP protocol family
Dec 13 08:53:28.993392 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 08:53:28.993492 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 08:53:28.993580 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 08:53:28.993667 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 08:53:28.993751 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 08:53:28.993857 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 08:53:28.993965 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 08:53:28.993984 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 08:53:28.994114 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 36025 usecs
Dec 13 08:53:28.994126 kernel: PCI: CLS 0 bytes, default 64
Dec 13 08:53:28.994135 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 08:53:28.994143 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Dec 13 08:53:28.996212 kernel: Initialise system trusted keyrings
Dec 13 08:53:28.996252 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 08:53:28.996262 kernel: Key type asymmetric registered
Dec 13 08:53:28.996270 kernel: Asymmetric key parser 'x509' registered
Dec 13 08:53:28.996279 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 08:53:28.996287 kernel: io scheduler mq-deadline registered
Dec 13 08:53:28.996296 kernel: io scheduler kyber registered
Dec 13 08:53:28.996304 kernel: io scheduler bfq registered
Dec 13 08:53:28.996312 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 08:53:28.996322 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 08:53:28.996333 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 08:53:28.996341 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 08:53:28.996349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 08:53:28.996357 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 08:53:28.996366 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 08:53:28.996374 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 08:53:28.996382 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 08:53:28.996391 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 08:53:28.996536 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 08:53:28.996631 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 08:53:28.996723 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T08:53:28 UTC (1734080008)
Dec 13 08:53:28.996808 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 08:53:28.996819 kernel: intel_pstate: CPU model not supported
Dec 13 08:53:28.996827 kernel: NET: Registered PF_INET6 protocol family
Dec 13 08:53:28.996835 kernel: Segment Routing with IPv6
Dec 13 08:53:28.996844 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 08:53:28.996852 kernel: NET: Registered PF_PACKET protocol family
Dec 13 08:53:28.996863 kernel: Key type dns_resolver registered
Dec 13 08:53:28.996871 kernel: IPI shorthand broadcast: enabled
Dec 13 08:53:28.996879 kernel: sched_clock: Marking stable (1002005198, 137465253)->(1263095384, -123624933)
Dec 13 08:53:28.996888 kernel: registered taskstats version 1
Dec 13 08:53:28.996896 kernel: Loading compiled-in X.509 certificates
Dec 13 08:53:28.996904 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 08:53:28.996912 kernel: Key type .fscrypt registered
Dec 13 08:53:28.996920 kernel: Key type fscrypt-provisioning registered
Dec 13 08:53:28.996929 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 08:53:28.996939 kernel: ima: Allocated hash algorithm: sha1
Dec 13 08:53:28.996947 kernel: ima: No architecture policies found
Dec 13 08:53:28.996955 kernel: clk: Disabling unused clocks
Dec 13 08:53:28.996963 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 08:53:28.996972 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 08:53:28.997000 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 08:53:28.997011 kernel: Run /init as init process
Dec 13 08:53:28.997020 kernel: with arguments:
Dec 13 08:53:28.997028 kernel: /init
Dec 13 08:53:28.997039 kernel: with environment:
Dec 13 08:53:28.997047 kernel: HOME=/
Dec 13 08:53:28.997055 kernel: TERM=linux
Dec 13 08:53:28.997063 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 08:53:28.997074 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 08:53:28.997086 systemd[1]: Detected virtualization kvm.
Dec 13 08:53:28.997095 systemd[1]: Detected architecture x86-64.
Dec 13 08:53:28.997106 systemd[1]: Running in initrd.
Dec 13 08:53:28.997115 systemd[1]: No hostname configured, using default hostname.
Dec 13 08:53:28.997123 systemd[1]: Hostname set to .
Dec 13 08:53:28.997134 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 08:53:28.997142 systemd[1]: Queued start job for default target initrd.target.
Dec 13 08:53:28.997170 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:53:28.997184 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:53:28.997197 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 08:53:28.997214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 08:53:28.997227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 08:53:28.997242 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 08:53:28.997253 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 08:53:28.997262 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 08:53:28.997271 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:53:28.997280 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:53:28.997309 systemd[1]: Reached target paths.target - Path Units.
Dec 13 08:53:28.997323 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 08:53:28.997338 systemd[1]: Reached target swap.target - Swaps.
Dec 13 08:53:28.997353 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 08:53:28.997363 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 08:53:28.997372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 08:53:28.997383 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 08:53:28.997392 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 08:53:28.997401 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:53:28.997410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 08:53:28.997419 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:53:28.997428 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 08:53:28.997437 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 08:53:28.997445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 08:53:28.997461 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 08:53:28.997477 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 08:53:28.997493 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 08:53:28.997505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 08:53:28.997514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:53:28.997526 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 08:53:28.997535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:53:28.997544 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 08:53:28.997588 systemd-journald[184]: Collecting audit messages is disabled. Dec 13 08:53:28.997627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 08:53:28.997637 systemd-journald[184]: Journal started Dec 13 08:53:28.997659 systemd-journald[184]: Runtime Journal (/run/log/journal/6526abe300d14bbc87537600d5d631c9) is 4.9M, max 39.3M, 34.4M free. 
Dec 13 08:53:28.978697 systemd-modules-load[185]: Inserted module 'overlay' Dec 13 08:53:29.022829 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 08:53:29.022866 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 08:53:29.025980 systemd-modules-load[185]: Inserted module 'br_netfilter' Dec 13 08:53:29.026599 kernel: Bridge firewalling registered Dec 13 08:53:29.027872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 08:53:29.033459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:53:29.034326 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 08:53:29.041600 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:53:29.044340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 08:53:29.046331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 08:53:29.052788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 08:53:29.071860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:53:29.072885 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:53:29.076612 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:53:29.077446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:53:29.086606 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 08:53:29.092094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 08:53:29.105697 dracut-cmdline[217]: dracut-dracut-053 Dec 13 08:53:29.115172 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 08:53:29.139446 systemd-resolved[218]: Positive Trust Anchors: Dec 13 08:53:29.140346 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 08:53:29.140383 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 08:53:29.146998 systemd-resolved[218]: Defaulting to hostname 'linux'. Dec 13 08:53:29.148588 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 08:53:29.150024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:53:29.215255 kernel: SCSI subsystem initialized Dec 13 08:53:29.228191 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 08:53:29.241191 kernel: iscsi: registered transport (tcp) Dec 13 08:53:29.267234 kernel: iscsi: registered transport (qla4xxx) Dec 13 08:53:29.267341 kernel: QLogic iSCSI HBA Driver Dec 13 08:53:29.317743 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 08:53:29.338558 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 08:53:29.367134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 08:53:29.367232 kernel: device-mapper: uevent: version 1.0.3 Dec 13 08:53:29.369180 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 08:53:29.414230 kernel: raid6: avx2x4 gen() 29494 MB/s Dec 13 08:53:29.432231 kernel: raid6: avx2x2 gen() 25788 MB/s Dec 13 08:53:29.450436 kernel: raid6: avx2x1 gen() 12301 MB/s Dec 13 08:53:29.450535 kernel: raid6: using algorithm avx2x4 gen() 29494 MB/s Dec 13 08:53:29.468452 kernel: raid6: .... xor() 10248 MB/s, rmw enabled Dec 13 08:53:29.468549 kernel: raid6: using avx2x2 recovery algorithm Dec 13 08:53:29.494233 kernel: xor: automatically using best checksumming function avx Dec 13 08:53:29.668216 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 08:53:29.683170 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 08:53:29.689665 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:53:29.719847 systemd-udevd[401]: Using default interface naming scheme 'v255'. Dec 13 08:53:29.726297 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:53:29.735407 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 08:53:29.757201 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 08:53:29.796116 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 13 08:53:29.802473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 08:53:29.874622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:53:29.882665 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 08:53:29.899108 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 08:53:29.901964 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 08:53:29.903345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:53:29.905902 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 08:53:29.915728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 08:53:29.938181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 08:53:29.967178 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 13 08:53:30.042334 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 08:53:30.042482 kernel: scsi host0: Virtio SCSI HBA Dec 13 08:53:30.042642 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 08:53:30.042654 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 08:53:30.042666 kernel: GPT:9289727 != 125829119 Dec 13 08:53:30.042683 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 08:53:30.042699 kernel: GPT:9289727 != 125829119 Dec 13 08:53:30.042720 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 08:53:30.042756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:53:30.042771 kernel: ACPI: bus type USB registered Dec 13 08:53:30.042782 kernel: usbcore: registered new interface driver usbfs Dec 13 08:53:30.042793 kernel: usbcore: registered new interface driver hub Dec 13 08:53:30.042809 kernel: usbcore: registered new device driver usb Dec 13 08:53:30.042826 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 13 08:53:30.068899 kernel: virtio_blk virtio5: [vdb] 920 512-byte logical blocks (471 kB/460 KiB) Dec 13 08:53:30.069089 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 08:53:30.069103 kernel: AES CTR mode by8 optimization enabled Dec 13 08:53:30.025838 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 08:53:30.072248 kernel: libata version 3.00 loaded. Dec 13 08:53:30.025963 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:53:30.029204 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:53:30.029996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:53:30.030735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:53:30.031646 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:53:30.047518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 08:53:30.105188 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 08:53:30.132799 kernel: scsi host1: ata_piix Dec 13 08:53:30.132992 kernel: scsi host2: ata_piix Dec 13 08:53:30.133142 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Dec 13 08:53:30.133170 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Dec 13 08:53:30.145175 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460) Dec 13 08:53:30.145236 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (450) Dec 13 08:53:30.153483 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 08:53:30.179807 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 08:53:30.189132 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 13 08:53:30.189450 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 13 08:53:30.189584 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 13 08:53:30.189702 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 13 08:53:30.189813 kernel: hub 1-0:1.0: USB hub found Dec 13 08:53:30.189985 kernel: hub 1-0:1.0: 2 ports detected Dec 13 08:53:30.188837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:53:30.196980 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 08:53:30.197781 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 08:53:30.203301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 08:53:30.214461 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 13 08:53:30.219373 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:53:30.223665 disk-uuid[540]: Primary Header is updated. Dec 13 08:53:30.223665 disk-uuid[540]: Secondary Entries is updated. Dec 13 08:53:30.223665 disk-uuid[540]: Secondary Header is updated. Dec 13 08:53:30.229211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:53:30.235223 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:53:30.243268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:53:30.243581 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:53:31.242145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:53:31.242234 disk-uuid[541]: The operation has completed successfully. Dec 13 08:53:31.283693 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 08:53:31.283825 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 08:53:31.297578 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 08:53:31.304256 sh[563]: Success Dec 13 08:53:31.321197 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 08:53:31.387989 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 08:53:31.402464 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 08:53:31.407781 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 08:53:31.438896 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 08:53:31.438983 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:53:31.439002 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 08:53:31.439016 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 08:53:31.440367 kernel: BTRFS info (device dm-0): using free space tree Dec 13 08:53:31.448494 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 08:53:31.449896 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 08:53:31.456447 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 08:53:31.460013 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 08:53:31.472180 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:53:31.474697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:53:31.474746 kernel: BTRFS info (device vda6): using free space tree Dec 13 08:53:31.479184 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 08:53:31.490429 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 08:53:31.492870 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:53:31.498808 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 08:53:31.504390 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 08:53:31.639297 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 13 08:53:31.642360 ignition[654]: Ignition 2.19.0 Dec 13 08:53:31.646438 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 08:53:31.642370 ignition[654]: Stage: fetch-offline Dec 13 08:53:31.648441 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 08:53:31.642408 ignition[654]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:53:31.642418 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:53:31.642509 ignition[654]: parsed url from cmdline: "" Dec 13 08:53:31.642513 ignition[654]: no config URL provided Dec 13 08:53:31.642518 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 08:53:31.642526 ignition[654]: no config at "/usr/lib/ignition/user.ign" Dec 13 08:53:31.642531 ignition[654]: failed to fetch config: resource requires networking Dec 13 08:53:31.642711 ignition[654]: Ignition finished successfully Dec 13 08:53:31.681540 systemd-networkd[753]: lo: Link UP Dec 13 08:53:31.681552 systemd-networkd[753]: lo: Gained carrier Dec 13 08:53:31.684145 systemd-networkd[753]: Enumeration completed Dec 13 08:53:31.684308 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 08:53:31.685169 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 08:53:31.685173 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 13 08:53:31.686083 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:53:31.686086 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 08:53:31.686898 systemd-networkd[753]: eth0: Link UP Dec 13 08:53:31.686903 systemd-networkd[753]: eth0: Gained carrier Dec 13 08:53:31.686910 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 08:53:31.687351 systemd[1]: Reached target network.target - Network. Dec 13 08:53:31.690528 systemd-networkd[753]: eth1: Link UP Dec 13 08:53:31.690533 systemd-networkd[753]: eth1: Gained carrier Dec 13 08:53:31.690545 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:53:31.696671 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 08:53:31.706318 systemd-networkd[753]: eth0: DHCPv4 address 137.184.89.200/20, gateway 137.184.80.1 acquired from 169.254.169.253 Dec 13 08:53:31.711298 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.3/20, gateway 10.124.0.1 acquired from 169.254.169.253 Dec 13 08:53:31.720617 ignition[756]: Ignition 2.19.0 Dec 13 08:53:31.720637 ignition[756]: Stage: fetch Dec 13 08:53:31.720969 ignition[756]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:53:31.720983 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:53:31.721087 ignition[756]: parsed url from cmdline: "" Dec 13 08:53:31.721090 ignition[756]: no config URL provided Dec 13 08:53:31.721096 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 08:53:31.721105 ignition[756]: no config at "/usr/lib/ignition/user.ign" Dec 13 08:53:31.721129 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 13 08:53:31.756897 ignition[756]: GET result: OK Dec 13 08:53:31.757017 ignition[756]: parsing config with SHA512: 6f88f725489c68f9dce9d4924af5043a75e59a920d7c07d7e92943782256292f0087d446a4c2a59ee56564bfb6130a14ad931589f061695a20b8e80788a9cb9f Dec 13 08:53:31.762111 
unknown[756]: fetched base config from "system" Dec 13 08:53:31.762129 unknown[756]: fetched base config from "system" Dec 13 08:53:31.762592 ignition[756]: fetch: fetch complete Dec 13 08:53:31.762138 unknown[756]: fetched user config from "digitalocean" Dec 13 08:53:31.762604 ignition[756]: fetch: fetch passed Dec 13 08:53:31.762676 ignition[756]: Ignition finished successfully Dec 13 08:53:31.765852 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 08:53:31.769491 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 08:53:31.799925 ignition[763]: Ignition 2.19.0 Dec 13 08:53:31.799941 ignition[763]: Stage: kargs Dec 13 08:53:31.801455 ignition[763]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:53:31.801477 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:53:31.804520 ignition[763]: kargs: kargs passed Dec 13 08:53:31.805128 ignition[763]: Ignition finished successfully Dec 13 08:53:31.806969 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 08:53:31.813538 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 08:53:31.829982 ignition[770]: Ignition 2.19.0 Dec 13 08:53:31.829997 ignition[770]: Stage: disks Dec 13 08:53:31.832274 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 08:53:31.830216 ignition[770]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:53:31.830228 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:53:31.833908 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 08:53:31.831023 ignition[770]: disks: disks passed Dec 13 08:53:31.839255 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 08:53:31.831075 ignition[770]: Ignition finished successfully Dec 13 08:53:31.840365 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 13 08:53:31.841511 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 08:53:31.842537 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:53:31.850365 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 08:53:31.866647 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 08:53:31.870722 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 08:53:31.878313 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 08:53:31.987196 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 08:53:31.988496 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 08:53:31.989755 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 08:53:31.999351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:53:32.002316 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 08:53:32.004362 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Dec 13 08:53:32.012175 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (786) Dec 13 08:53:32.012402 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 08:53:32.021670 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:53:32.021701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:53:32.021713 kernel: BTRFS info (device vda6): using free space tree Dec 13 08:53:32.021724 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 08:53:32.013904 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Dec 13 08:53:32.013957 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 08:53:32.032044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 08:53:32.034174 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 08:53:32.039361 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 08:53:32.113188 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 08:53:32.115364 coreos-metadata[789]: Dec 13 08:53:32.114 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:53:32.122426 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Dec 13 08:53:32.125602 coreos-metadata[788]: Dec 13 08:53:32.125 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:53:32.127587 coreos-metadata[789]: Dec 13 08:53:32.126 INFO Fetch successful Dec 13 08:53:32.130766 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 08:53:32.131804 coreos-metadata[789]: Dec 13 08:53:32.131 INFO wrote hostname ci-4081.2.1-6-e72ca174b4 to /sysroot/etc/hostname Dec 13 08:53:32.132642 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 08:53:32.137224 coreos-metadata[788]: Dec 13 08:53:32.136 INFO Fetch successful Dec 13 08:53:32.139267 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 08:53:32.141996 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Dec 13 08:53:32.142775 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Dec 13 08:53:32.244577 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 08:53:32.250318 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 08:53:32.252343 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 13 08:53:32.262184 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:53:32.288450 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 08:53:32.293502 ignition[908]: INFO : Ignition 2.19.0 Dec 13 08:53:32.293502 ignition[908]: INFO : Stage: mount Dec 13 08:53:32.294948 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:53:32.294948 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:53:32.294948 ignition[908]: INFO : mount: mount passed Dec 13 08:53:32.294948 ignition[908]: INFO : Ignition finished successfully Dec 13 08:53:32.295814 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 08:53:32.307377 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 08:53:32.433432 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 08:53:32.440499 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:53:32.451615 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921) Dec 13 08:53:32.451699 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:53:32.453184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:53:32.454450 kernel: BTRFS info (device vda6): using free space tree Dec 13 08:53:32.458205 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 08:53:32.460334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 08:53:32.483902 ignition[938]: INFO : Ignition 2.19.0 Dec 13 08:53:32.484742 ignition[938]: INFO : Stage: files Dec 13 08:53:32.485191 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:53:32.485191 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:53:32.486966 ignition[938]: DEBUG : files: compiled without relabeling support, skipping Dec 13 08:53:32.486966 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 08:53:32.486966 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 08:53:32.490291 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 08:53:32.491119 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 08:53:32.491119 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 08:53:32.490881 unknown[938]: wrote ssh authorized keys file for user: core Dec 13 08:53:32.493868 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 08:53:32.493868 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 08:53:32.493868 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:53:32.493868 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:53:32.493868 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 08:53:32.493868 ignition[938]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 08:53:32.493868 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 08:53:32.493868 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 08:53:32.738441 systemd-networkd[753]: eth1: Gained IPv6LL Dec 13 08:53:32.954481 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 08:53:32.994563 systemd-networkd[753]: eth0: Gained IPv6LL Dec 13 08:53:33.224771 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 08:53:33.226143 ignition[938]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:53:33.226143 ignition[938]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:53:33.226143 ignition[938]: INFO : files: files passed Dec 13 08:53:33.226143 ignition[938]: INFO : Ignition finished successfully Dec 13 08:53:33.227472 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 08:53:33.240541 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 08:53:33.248371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 08:53:33.250414 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 08:53:33.250520 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 08:53:33.273764 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:53:33.273764 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:53:33.276648 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:53:33.277609 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:53:33.278842 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 08:53:33.285477 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 08:53:33.311184 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 08:53:33.311311 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 08:53:33.312195 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 08:53:33.313010 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 08:53:33.314439 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 08:53:33.320434 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 08:53:33.338648 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:53:33.345488 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 08:53:33.357350 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:53:33.359233 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:53:33.360892 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 08:53:33.362007 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Dec 13 08:53:33.362159 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 08:53:33.363942 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 08:53:33.364609 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 08:53:33.365740 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 08:53:33.366722 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 08:53:33.368132 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 08:53:33.369372 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 08:53:33.370377 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 08:53:33.371500 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 08:53:33.372697 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 08:53:33.373843 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 08:53:33.374740 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 08:53:33.374872 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 08:53:33.376043 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:53:33.376682 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:53:33.377845 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 08:53:33.377985 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:53:33.378964 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 08:53:33.379137 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 08:53:33.380673 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 08:53:33.380803 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 08:53:33.382322 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 08:53:33.382466 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 08:53:33.383281 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 08:53:33.383424 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 08:53:33.390530 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 08:53:33.394728 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 08:53:33.395266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 08:53:33.395458 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:53:33.396534 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 08:53:33.396635 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 08:53:33.402601 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 08:53:33.402719 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 08:53:33.417184 ignition[991]: INFO : Ignition 2.19.0
Dec 13 08:53:33.417184 ignition[991]: INFO : Stage: umount
Dec 13 08:53:33.417184 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 08:53:33.417184 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:53:33.422792 ignition[991]: INFO : umount: umount passed
Dec 13 08:53:33.422792 ignition[991]: INFO : Ignition finished successfully
Dec 13 08:53:33.419820 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 08:53:33.419934 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 08:53:33.423519 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 08:53:33.423612 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 08:53:33.424819 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 08:53:33.424867 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 08:53:33.426140 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 08:53:33.426222 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 08:53:33.427246 systemd[1]: Stopped target network.target - Network.
Dec 13 08:53:33.427697 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 08:53:33.427743 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 08:53:33.428409 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 08:53:33.431516 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 08:53:33.437251 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:53:33.438731 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 08:53:33.439760 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 08:53:33.441307 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 08:53:33.441388 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 08:53:33.442559 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 08:53:33.442621 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 08:53:33.444362 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 08:53:33.444432 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 08:53:33.445550 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 08:53:33.445603 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 08:53:33.446988 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 08:53:33.448477 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 08:53:33.453652 systemd-networkd[753]: eth1: DHCPv6 lease lost
Dec 13 08:53:33.455108 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 08:53:33.456248 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 08:53:33.456355 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 08:53:33.457707 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 08:53:33.457829 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 08:53:33.459336 systemd-networkd[753]: eth0: DHCPv6 lease lost
Dec 13 08:53:33.461898 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 08:53:33.462016 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 08:53:33.463374 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 08:53:33.463491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 08:53:33.467543 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 08:53:33.467598 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:53:33.476334 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 08:53:33.476904 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 08:53:33.476975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 08:53:33.477719 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 08:53:33.477768 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:53:33.478405 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 08:53:33.478448 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:53:33.479536 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 08:53:33.479592 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:53:33.481205 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:53:33.494293 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 08:53:33.494428 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 08:53:33.495913 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 08:53:33.496047 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:53:33.497614 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 08:53:33.497698 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:53:33.498712 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 08:53:33.498786 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:53:33.499770 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 08:53:33.499822 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 08:53:33.501619 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 08:53:33.501674 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 08:53:33.502689 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 08:53:33.502754 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:53:33.514814 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 08:53:33.515452 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 08:53:33.515522 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:53:33.516123 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 08:53:33.516204 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:53:33.516800 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 08:53:33.516841 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:53:33.518191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:53:33.518247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:53:33.523470 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 08:53:33.523610 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 08:53:33.525856 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 08:53:33.535775 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 08:53:33.544106 systemd[1]: Switching root.
Dec 13 08:53:33.626706 systemd-journald[184]: Journal stopped
Dec 13 08:53:34.756045 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Dec 13 08:53:34.756145 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 08:53:34.757233 kernel: SELinux: policy capability open_perms=1
Dec 13 08:53:34.757280 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 08:53:34.757301 kernel: SELinux: policy capability always_check_network=0
Dec 13 08:53:34.757314 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 08:53:34.757326 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 08:53:34.757337 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 08:53:34.757349 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 08:53:34.757367 systemd[1]: Successfully loaded SELinux policy in 41.163ms.
Dec 13 08:53:34.757393 kernel: audit: type=1403 audit(1734080013.771:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 08:53:34.757407 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.672ms.
Dec 13 08:53:34.757421 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 08:53:34.757433 systemd[1]: Detected virtualization kvm.
Dec 13 08:53:34.757445 systemd[1]: Detected architecture x86-64.
Dec 13 08:53:34.757457 systemd[1]: Detected first boot.
Dec 13 08:53:34.757469 systemd[1]: Hostname set to .
Dec 13 08:53:34.757484 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 08:53:34.757496 zram_generator::config[1033]: No configuration found.
Dec 13 08:53:34.757510 systemd[1]: Populated /etc with preset unit settings.
Dec 13 08:53:34.757527 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 08:53:34.757538 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 08:53:34.757550 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 08:53:34.757564 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 08:53:34.757576 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 08:53:34.757590 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 08:53:34.757603 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 08:53:34.757615 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 08:53:34.757627 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 08:53:34.757644 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 08:53:34.757656 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 08:53:34.757667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:53:34.757679 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:53:34.757690 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 08:53:34.757705 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 08:53:34.757716 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 08:53:34.757728 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 08:53:34.757739 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 08:53:34.757751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:53:34.757763 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 08:53:34.757774 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 08:53:34.757788 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 08:53:34.757800 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 08:53:34.757812 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:53:34.757823 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 08:53:34.757834 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 08:53:34.757847 systemd[1]: Reached target swap.target - Swaps.
Dec 13 08:53:34.757860 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 08:53:34.757871 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 08:53:34.757885 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:53:34.757897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:53:34.757909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:53:34.757922 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 08:53:34.757933 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 08:53:34.757946 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 08:53:34.757957 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 08:53:34.757969 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:34.757981 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 08:53:34.757995 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 08:53:34.758007 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 08:53:34.758019 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 08:53:34.758031 systemd[1]: Reached target machines.target - Containers.
Dec 13 08:53:34.758042 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 08:53:34.758054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:53:34.758070 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 08:53:34.758081 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 08:53:34.758093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:53:34.758108 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 08:53:34.758120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:53:34.758132 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 08:53:34.758143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:53:34.760232 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 08:53:34.760251 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 08:53:34.760264 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 08:53:34.760276 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 08:53:34.760294 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 08:53:34.760306 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 08:53:34.760318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 08:53:34.760330 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 08:53:34.760342 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 08:53:34.760354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 08:53:34.760366 kernel: loop: module loaded
Dec 13 08:53:34.760380 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 08:53:34.760391 systemd[1]: Stopped verity-setup.service.
Dec 13 08:53:34.760406 kernel: fuse: init (API version 7.39)
Dec 13 08:53:34.760418 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:34.760430 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 08:53:34.760442 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 08:53:34.760453 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 08:53:34.760464 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 08:53:34.760478 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 08:53:34.760490 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 08:53:34.760502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:53:34.760519 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 08:53:34.760534 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 08:53:34.760545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:53:34.760556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:53:34.760568 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:53:34.760580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:53:34.760592 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 08:53:34.760604 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 08:53:34.760616 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:53:34.760628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:53:34.760642 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:53:34.760654 kernel: ACPI: bus type drm_connector registered
Dec 13 08:53:34.760665 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 08:53:34.760676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 08:53:34.760688 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 08:53:34.760700 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 08:53:34.760711 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 08:53:34.760722 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 08:53:34.760735 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 08:53:34.760786 systemd-journald[1116]: Collecting audit messages is disabled.
Dec 13 08:53:34.760812 systemd-journald[1116]: Journal started
Dec 13 08:53:34.760837 systemd-journald[1116]: Runtime Journal (/run/log/journal/6526abe300d14bbc87537600d5d631c9) is 4.9M, max 39.3M, 34.4M free.
Dec 13 08:53:34.765211 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 08:53:34.332136 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 08:53:34.351171 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 08:53:34.351599 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 08:53:34.771620 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 08:53:34.771697 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 08:53:34.776184 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 08:53:34.783722 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 08:53:34.790387 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 08:53:34.790498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:53:34.801553 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 08:53:34.807201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:53:34.818247 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 08:53:34.825184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:53:34.840641 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 08:53:34.856193 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 08:53:34.873195 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 08:53:34.875207 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 08:53:34.879364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:53:34.880408 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 08:53:34.886758 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 08:53:34.888528 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 08:53:34.891980 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 08:53:34.915566 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 08:53:34.922933 kernel: loop0: detected capacity change from 0 to 8
Dec 13 08:53:34.925455 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 08:53:34.935805 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 08:53:34.944647 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 08:53:34.956427 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 08:53:34.977926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:53:34.986680 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 08:53:34.989027 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 08:53:34.994906 systemd-journald[1116]: Time spent on flushing to /var/log/journal/6526abe300d14bbc87537600d5d631c9 is 55.155ms for 983 entries.
Dec 13 08:53:34.994906 systemd-journald[1116]: System Journal (/var/log/journal/6526abe300d14bbc87537600d5d631c9) is 8.0M, max 195.6M, 187.6M free.
Dec 13 08:53:35.083842 systemd-journald[1116]: Received client request to flush runtime journal.
Dec 13 08:53:35.083912 kernel: loop1: detected capacity change from 0 to 142488
Dec 13 08:53:35.083938 kernel: loop2: detected capacity change from 0 to 211296
Dec 13 08:53:35.009871 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 08:53:35.036643 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
Dec 13 08:53:35.036660 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
Dec 13 08:53:35.061872 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:53:35.082104 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 08:53:35.088609 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 08:53:35.143200 kernel: loop3: detected capacity change from 0 to 140768
Dec 13 08:53:35.170715 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 08:53:35.184703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 08:53:35.225198 kernel: loop4: detected capacity change from 0 to 8
Dec 13 08:53:35.229213 kernel: loop5: detected capacity change from 0 to 142488
Dec 13 08:53:35.262373 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Dec 13 08:53:35.262405 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Dec 13 08:53:35.267949 kernel: loop6: detected capacity change from 0 to 211296
Dec 13 08:53:35.272537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:53:35.304187 kernel: loop7: detected capacity change from 0 to 140768
Dec 13 08:53:35.350273 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Dec 13 08:53:35.350875 (sd-merge)[1180]: Merged extensions into '/usr'.
Dec 13 08:53:35.373618 systemd[1]: Reloading requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 08:53:35.373643 systemd[1]: Reloading...
Dec 13 08:53:35.544037 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 08:53:35.552192 zram_generator::config[1208]: No configuration found.
Dec 13 08:53:35.718017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:53:35.767254 systemd[1]: Reloading finished in 393 ms.
Dec 13 08:53:35.814697 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 08:53:35.815717 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 08:53:35.816670 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 08:53:35.826382 systemd[1]: Starting ensure-sysext.service...
Dec 13 08:53:35.828332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 08:53:35.831362 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:53:35.847288 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Dec 13 08:53:35.847313 systemd[1]: Reloading...
Dec 13 08:53:35.862274 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 08:53:35.862979 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 08:53:35.864031 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 08:53:35.864487 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Dec 13 08:53:35.864602 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Dec 13 08:53:35.868736 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 08:53:35.868881 systemd-tmpfiles[1253]: Skipping /boot
Dec 13 08:53:35.882297 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 08:53:35.882429 systemd-tmpfiles[1253]: Skipping /boot
Dec 13 08:53:35.902364 systemd-udevd[1254]: Using default interface naming scheme 'v255'.
Dec 13 08:53:35.987246 zram_generator::config[1285]: No configuration found.
Dec 13 08:53:36.026180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1288)
Dec 13 08:53:36.059185 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1284)
Dec 13 08:53:36.086470 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1284)
Dec 13 08:53:36.137213 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 08:53:36.144173 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 08:53:36.146910 kernel: ACPI: button: Power Button [PWRF]
Dec 13 08:53:36.181252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:53:36.193218 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 08:53:36.295487 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 08:53:36.295772 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 08:53:36.296695 systemd[1]: Reloading finished in 448 ms.
Dec 13 08:53:36.299226 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 08:53:36.311823 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:53:36.313306 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:53:36.341507 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 13 08:53:36.343269 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:36.349625 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 08:53:36.353440 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 08:53:36.355437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:53:36.362577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:53:36.380500 kernel: ISO 9660 Extensions: RRIP_1991A
Dec 13 08:53:36.382506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:53:36.385468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:53:36.386126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:53:36.390536 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 08:53:36.393443 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 08:53:36.396433 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 08:53:36.400414 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 08:53:36.402379 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 08:53:36.403310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:36.417703 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Dec 13 08:53:36.419769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:53:36.419951 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:53:36.421655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:53:36.421803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:53:36.445903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:53:36.452522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:53:36.470187 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 08:53:36.472256 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 08:53:36.477506 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 08:53:36.485596 kernel: Console: switching to colour dummy device 80x25
Dec 13 08:53:36.485687 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 08:53:36.485705 kernel: [drm] features: -context_init
Dec 13 08:53:36.483443 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:53:36.483964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:53:36.485562 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:53:36.497821 kernel: [drm] number of scanouts: 1
Dec 13 08:53:36.497895 kernel: [drm] number of cap sets: 0
Dec 13 08:53:36.497246 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 08:53:36.497619 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 08:53:36.500193 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Dec 13 08:53:36.506306 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 08:53:36.506396 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 08:53:36.510216 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 08:53:36.514051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:36.516459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:53:36.528242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:53:36.536522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:53:36.544473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:53:36.544686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:53:36.544799 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 08:53:36.544877 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:36.546568 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 08:53:36.564131 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:36.566161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:53:36.566397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:53:36.566528 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 08:53:36.566642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:36.569979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:53:36.571829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:53:36.576876 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 08:53:36.593607 augenrules[1397]: No rules
Dec 13 08:53:36.593843 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 08:53:36.607341 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:36.607641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:53:36.617040 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 08:53:36.625192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:53:36.625496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:53:36.626002 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 08:53:36.626130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:53:36.629641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:53:36.630098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:53:36.631696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:53:36.643294 kernel: EDAC MC: Ver: 3.0.0
Dec 13 08:53:36.632433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:53:36.646478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:53:36.665585 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 08:53:36.666252 systemd[1]: Finished ensure-sysext.service.
Dec 13 08:53:36.666802 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:53:36.666940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:53:36.673845 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:53:36.681590 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 08:53:36.691652 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 08:53:36.692303 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 08:53:36.692840 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 08:53:36.693226 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 08:53:36.693833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:53:36.695227 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:53:36.700790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:53:36.705636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:53:36.706073 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:53:36.717987 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 08:53:36.732462 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 08:53:36.742380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:53:36.747378 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 08:53:36.785324 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 08:53:36.817737 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 08:53:36.818453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:53:36.827513 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 08:53:36.853559 systemd-networkd[1366]: lo: Link UP
Dec 13 08:53:36.853570 systemd-networkd[1366]: lo: Gained carrier
Dec 13 08:53:36.856921 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 08:53:36.858111 systemd-networkd[1366]: Enumeration completed
Dec 13 08:53:36.858313 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 08:53:36.859556 systemd-networkd[1366]: eth0: Configuring with /run/systemd/network/10-52:0f:12:76:02:88.network.
Dec 13 08:53:36.861043 systemd-networkd[1366]: eth1: Configuring with /run/systemd/network/10-46:3e:ef:36:6b:50.network.
Dec 13 08:53:36.862691 systemd-networkd[1366]: eth0: Link UP
Dec 13 08:53:36.862700 systemd-networkd[1366]: eth0: Gained carrier
Dec 13 08:53:36.866509 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 08:53:36.866549 systemd-networkd[1366]: eth1: Link UP
Dec 13 08:53:36.866554 systemd-networkd[1366]: eth1: Gained carrier
Dec 13 08:53:36.867012 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 08:53:36.867525 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 08:53:36.874290 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Dec 13 08:53:36.888715 systemd-resolved[1367]: Positive Trust Anchors:
Dec 13 08:53:36.888731 systemd-resolved[1367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 08:53:36.888767 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 08:53:36.897898 systemd-resolved[1367]: Using system hostname 'ci-4081.2.1-6-e72ca174b4'.
Dec 13 08:53:36.899959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:53:36.901613 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 08:53:36.903272 systemd[1]: Reached target network.target - Network.
Dec 13 08:53:36.904789 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:53:36.905872 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 08:53:36.906439 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 08:53:36.906940 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 08:53:36.910741 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 08:53:36.911424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 08:53:36.911928 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 08:53:36.914557 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 08:53:36.914615 systemd[1]: Reached target paths.target - Path Units.
Dec 13 08:53:36.915188 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 08:53:36.917931 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 08:53:36.921020 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 08:53:36.927633 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 08:53:36.931727 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 08:53:36.933711 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 08:53:36.935138 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 08:53:36.939548 systemd[1]: Reached target basic.target - Basic System.
Dec 13 08:53:36.940234 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 08:53:36.940275 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 08:53:36.951483 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 08:53:36.954756 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 08:53:36.971548 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 08:53:36.979768 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 08:53:36.992119 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 08:53:36.992605 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 08:53:36.996703 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 08:53:37.002580 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 08:53:37.005374 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 08:53:37.014427 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 08:53:37.017011 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 08:53:37.017674 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 08:53:37.022325 jq[1448]: false
Dec 13 08:53:37.025477 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 08:53:37.028642 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 08:53:37.036663 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 08:53:37.037435 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 08:53:37.057489 coreos-metadata[1446]: Dec 13 08:53:37.054 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 08:53:37.067172 extend-filesystems[1451]: Found loop4
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found loop5
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found loop6
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found loop7
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found vda
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found vda1
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found vda2
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found vda3
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found usr
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found vda4
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found vda6
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found vda7
Dec 13 08:53:37.079282 extend-filesystems[1451]: Found vda9
Dec 13 08:53:37.079282 extend-filesystems[1451]: Checking size of /dev/vda9
Dec 13 08:53:37.138213 coreos-metadata[1446]: Dec 13 08:53:37.077 INFO Fetch successful
Dec 13 08:53:37.073802 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 08:53:37.082617 dbus-daemon[1447]: [system] SELinux support is enabled
Dec 13 08:53:37.138854 update_engine[1456]: I20241213 08:53:37.111392 1456 main.cc:92] Flatcar Update Engine starting
Dec 13 08:53:37.138854 update_engine[1456]: I20241213 08:53:37.125430 1456 update_check_scheduler.cc:74] Next update check in 7m32s
Dec 13 08:53:37.074103 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 08:53:37.152580 jq[1457]: true
Dec 13 08:53:37.084043 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 08:53:37.095234 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 08:53:37.095265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 08:53:37.108731 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 08:53:37.108860 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Dec 13 08:53:37.108885 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 08:53:37.140673 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 08:53:37.150398 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 08:53:37.157223 extend-filesystems[1451]: Resized partition /dev/vda9
Dec 13 08:53:37.170186 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024)
Dec 13 08:53:37.186899 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Dec 13 08:53:37.186930 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1289)
Dec 13 08:53:37.173511 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 08:53:37.180380 systemd-timesyncd[1414]: Contacted time server 198.23.249.167:123 (0.flatcar.pool.ntp.org).
Dec 13 08:53:37.180444 systemd-timesyncd[1414]: Initial clock synchronization to Fri 2024-12-13 08:53:37.473295 UTC.
Dec 13 08:53:37.204250 jq[1475]: true
Dec 13 08:53:37.223682 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 08:53:37.224268 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 08:53:37.232243 systemd-logind[1455]: New seat seat0.
Dec 13 08:53:37.234351 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 08:53:37.234375 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 08:53:37.234794 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 08:53:37.245870 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 08:53:37.252889 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 08:53:37.357727 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 08:53:37.383961 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 08:53:37.383961 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 08:53:37.383961 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 08:53:37.388290 extend-filesystems[1451]: Resized filesystem in /dev/vda9
Dec 13 08:53:37.388290 extend-filesystems[1451]: Found vdb
Dec 13 08:53:37.386254 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 08:53:37.386515 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 08:53:37.423924 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 08:53:37.421540 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 08:53:37.433606 systemd[1]: Starting sshkeys.service...
Dec 13 08:53:37.473284 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 08:53:37.478731 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 08:53:37.486257 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 08:53:37.583226 coreos-metadata[1518]: Dec 13 08:53:37.582 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 08:53:37.597854 coreos-metadata[1518]: Dec 13 08:53:37.597 INFO Fetch successful
Dec 13 08:53:37.606301 unknown[1518]: wrote ssh authorized keys file for user: core
Dec 13 08:53:37.633246 update-ssh-keys[1521]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 08:53:37.635217 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 08:53:37.641826 systemd[1]: Finished sshkeys.service.
Dec 13 08:53:37.658046 containerd[1477]: time="2024-12-13T08:53:37.657930981Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 08:53:37.720491 containerd[1477]: time="2024-12-13T08:53:37.720320911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 08:53:37.722945 containerd[1477]: time="2024-12-13T08:53:37.722798644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:53:37.722945 containerd[1477]: time="2024-12-13T08:53:37.722872736Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 08:53:37.722945 containerd[1477]: time="2024-12-13T08:53:37.722893454Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 08:53:37.723413 containerd[1477]: time="2024-12-13T08:53:37.723298696Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 08:53:37.723413 containerd[1477]: time="2024-12-13T08:53:37.723341806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 08:53:37.723709 containerd[1477]: time="2024-12-13T08:53:37.723562344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:53:37.723709 containerd[1477]: time="2024-12-13T08:53:37.723582047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 08:53:37.723958 containerd[1477]: time="2024-12-13T08:53:37.723921385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:53:37.724098 containerd[1477]: time="2024-12-13T08:53:37.723996784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 08:53:37.724098 containerd[1477]: time="2024-12-13T08:53:37.724028359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:53:37.724098 containerd[1477]: time="2024-12-13T08:53:37.724038672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 08:53:37.724284 containerd[1477]: time="2024-12-13T08:53:37.724259606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 08:53:37.724785 containerd[1477]: time="2024-12-13T08:53:37.724641557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 08:53:37.725009 containerd[1477]: time="2024-12-13T08:53:37.724962475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:53:37.725128 containerd[1477]: time="2024-12-13T08:53:37.725059027Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 08:53:37.725368 containerd[1477]: time="2024-12-13T08:53:37.725306971Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 08:53:37.725523 containerd[1477]: time="2024-12-13T08:53:37.725475030Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 08:53:37.732134 containerd[1477]: time="2024-12-13T08:53:37.731945734Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 08:53:37.732134 containerd[1477]: time="2024-12-13T08:53:37.732034826Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 08:53:37.732134 containerd[1477]: time="2024-12-13T08:53:37.732058833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 08:53:37.732134 containerd[1477]: time="2024-12-13T08:53:37.732074873Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 08:53:37.732134 containerd[1477]: time="2024-12-13T08:53:37.732091125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.732527142Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.732856505Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.732981423Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.732997799Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733010518Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733024634Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733037654Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733063824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733080822Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733125280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733142528Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733171362Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733183342Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 08:53:37.734185 containerd[1477]: time="2024-12-13T08:53:37.733202759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733217614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733360550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733385423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733402623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733422155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733459290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733479217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733492762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733513352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733527092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733540174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733553415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733571600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733594714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734514 containerd[1477]: time="2024-12-13T08:53:37.733607383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733617725Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733657347Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733676913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733693005Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733711283Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733725525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733742672Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733762932Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 08:53:37.734818 containerd[1477]: time="2024-12-13T08:53:37.733778162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 08:53:37.734982 containerd[1477]: time="2024-12-13T08:53:37.734088722Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 08:53:37.735378 containerd[1477]: time="2024-12-13T08:53:37.735353705Z" level=info msg="Connect containerd service" Dec 13 08:53:37.735472 containerd[1477]: time="2024-12-13T08:53:37.735460911Z" level=info msg="using legacy CRI server" Dec 13 08:53:37.735547 containerd[1477]: time="2024-12-13T08:53:37.735530579Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 08:53:37.735759 containerd[1477]: 
time="2024-12-13T08:53:37.735739004Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 08:53:37.736690 containerd[1477]: time="2024-12-13T08:53:37.736655378Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 08:53:37.736994 containerd[1477]: time="2024-12-13T08:53:37.736919020Z" level=info msg="Start subscribing containerd event" Dec 13 08:53:37.737048 containerd[1477]: time="2024-12-13T08:53:37.737030392Z" level=info msg="Start recovering state" Dec 13 08:53:37.737184 containerd[1477]: time="2024-12-13T08:53:37.737141171Z" level=info msg="Start event monitor" Dec 13 08:53:37.737184 containerd[1477]: time="2024-12-13T08:53:37.737182394Z" level=info msg="Start snapshots syncer" Dec 13 08:53:37.737328 containerd[1477]: time="2024-12-13T08:53:37.737196459Z" level=info msg="Start cni network conf syncer for default" Dec 13 08:53:37.737328 containerd[1477]: time="2024-12-13T08:53:37.737209290Z" level=info msg="Start streaming server" Dec 13 08:53:37.737650 containerd[1477]: time="2024-12-13T08:53:37.737629676Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 08:53:37.737745 containerd[1477]: time="2024-12-13T08:53:37.737733486Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 08:53:37.737961 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 08:53:37.740544 containerd[1477]: time="2024-12-13T08:53:37.740494251Z" level=info msg="containerd successfully booted in 0.083811s" Dec 13 08:53:37.799932 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 08:53:37.830052 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Dec 13 08:53:37.838591 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 08:53:37.850742 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 08:53:37.850968 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 08:53:37.858593 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 08:53:37.878573 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 08:53:37.892687 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 08:53:37.896849 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 08:53:37.899611 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 08:53:38.052283 systemd-networkd[1366]: eth0: Gained IPv6LL
Dec 13 08:53:38.054456 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 08:53:38.056971 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 08:53:38.078746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:53:38.082237 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 08:53:38.117061 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 08:53:38.243490 systemd-networkd[1366]: eth1: Gained IPv6LL
Dec 13 08:53:39.087782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:53:39.090075 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 08:53:39.093489 systemd[1]: Startup finished in 1.152s (kernel) + 5.052s (initrd) + 5.362s (userspace) = 11.568s.
Dec 13 08:53:39.094668 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 08:53:39.982611 kubelet[1561]: E1213 08:53:39.982388 1561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 08:53:39.985919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:53:39.986122 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:53:39.986568 systemd[1]: kubelet.service: Consumed 1.509s CPU time.
Dec 13 08:53:40.432909 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 08:53:40.439889 systemd[1]: Started sshd@0-137.184.89.200:22-147.75.109.163:59440.service - OpenSSH per-connection server daemon (147.75.109.163:59440).
Dec 13 08:53:40.534001 sshd[1574]: Accepted publickey for core from 147.75.109.163 port 59440 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:53:40.538436 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:53:40.552439 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 08:53:40.571899 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 08:53:40.579642 systemd-logind[1455]: New session 1 of user core.
Dec 13 08:53:40.597341 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 08:53:40.607877 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 08:53:40.634421 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 08:53:40.820655 systemd[1578]: Queued start job for default target default.target.
Dec 13 08:53:40.829868 systemd[1578]: Created slice app.slice - User Application Slice.
Dec 13 08:53:40.829948 systemd[1578]: Reached target paths.target - Paths.
Dec 13 08:53:40.829972 systemd[1578]: Reached target timers.target - Timers.
Dec 13 08:53:40.832900 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 08:53:40.859745 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 08:53:40.859973 systemd[1578]: Reached target sockets.target - Sockets.
Dec 13 08:53:40.859998 systemd[1578]: Reached target basic.target - Basic System.
Dec 13 08:53:40.860076 systemd[1578]: Reached target default.target - Main User Target.
Dec 13 08:53:40.860117 systemd[1578]: Startup finished in 213ms.
Dec 13 08:53:40.860795 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 08:53:40.872707 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 08:53:40.958173 systemd[1]: Started sshd@1-137.184.89.200:22-147.75.109.163:59446.service - OpenSSH per-connection server daemon (147.75.109.163:59446).
Dec 13 08:53:41.033984 sshd[1589]: Accepted publickey for core from 147.75.109.163 port 59446 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:53:41.037162 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:53:41.046869 systemd-logind[1455]: New session 2 of user core.
Dec 13 08:53:41.055720 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 08:53:41.130224 sshd[1589]: pam_unix(sshd:session): session closed for user core
Dec 13 08:53:41.141962 systemd[1]: sshd@1-137.184.89.200:22-147.75.109.163:59446.service: Deactivated successfully.
Dec 13 08:53:41.145263 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 08:53:41.148449 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit.
Dec 13 08:53:41.154919 systemd[1]: Started sshd@2-137.184.89.200:22-147.75.109.163:59450.service - OpenSSH per-connection server daemon (147.75.109.163:59450).
Dec 13 08:53:41.157031 systemd-logind[1455]: Removed session 2.
Dec 13 08:53:41.233841 sshd[1596]: Accepted publickey for core from 147.75.109.163 port 59450 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:53:41.236458 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:53:41.249687 systemd-logind[1455]: New session 3 of user core.
Dec 13 08:53:41.260873 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 08:53:41.328713 sshd[1596]: pam_unix(sshd:session): session closed for user core
Dec 13 08:53:41.348478 systemd[1]: sshd@2-137.184.89.200:22-147.75.109.163:59450.service: Deactivated successfully.
Dec 13 08:53:41.351650 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 08:53:41.355818 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit.
Dec 13 08:53:41.363194 systemd[1]: Started sshd@3-137.184.89.200:22-147.75.109.163:59462.service - OpenSSH per-connection server daemon (147.75.109.163:59462).
Dec 13 08:53:41.366292 systemd-logind[1455]: Removed session 3.
Dec 13 08:53:41.436177 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 59462 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:53:41.440429 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:53:41.450766 systemd-logind[1455]: New session 4 of user core.
Dec 13 08:53:41.460776 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 08:53:41.533585 sshd[1603]: pam_unix(sshd:session): session closed for user core
Dec 13 08:53:41.544471 systemd[1]: sshd@3-137.184.89.200:22-147.75.109.163:59462.service: Deactivated successfully.
Dec 13 08:53:41.547797 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 08:53:41.550457 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit.
Dec 13 08:53:41.557814 systemd[1]: Started sshd@4-137.184.89.200:22-147.75.109.163:59474.service - OpenSSH per-connection server daemon (147.75.109.163:59474).
Dec 13 08:53:41.562549 systemd-logind[1455]: Removed session 4.
Dec 13 08:53:41.617442 sshd[1610]: Accepted publickey for core from 147.75.109.163 port 59474 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:53:41.620329 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:53:41.628960 systemd-logind[1455]: New session 5 of user core.
Dec 13 08:53:41.635702 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 08:53:41.722091 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 08:53:41.722663 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:53:41.746670 sudo[1613]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:41.751768 sshd[1610]: pam_unix(sshd:session): session closed for user core
Dec 13 08:53:41.766282 systemd[1]: sshd@4-137.184.89.200:22-147.75.109.163:59474.service: Deactivated successfully.
Dec 13 08:53:41.769394 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 08:53:41.772516 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit.
Dec 13 08:53:41.787978 systemd[1]: Started sshd@5-137.184.89.200:22-147.75.109.163:59476.service - OpenSSH per-connection server daemon (147.75.109.163:59476).
Dec 13 08:53:41.790679 systemd-logind[1455]: Removed session 5.
Dec 13 08:53:41.850301 sshd[1618]: Accepted publickey for core from 147.75.109.163 port 59476 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:53:41.853962 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:53:41.863142 systemd-logind[1455]: New session 6 of user core.
Dec 13 08:53:41.868613 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 08:53:41.940323 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 08:53:41.940902 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:53:41.948204 sudo[1622]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:41.957975 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 08:53:41.958487 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:53:41.985755 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 08:53:41.989770 auditctl[1625]: No rules
Dec 13 08:53:41.990328 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 08:53:41.990791 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 08:53:41.999349 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 08:53:42.056683 augenrules[1643]: No rules
Dec 13 08:53:42.058920 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 08:53:42.061374 sudo[1621]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:42.066913 sshd[1618]: pam_unix(sshd:session): session closed for user core
Dec 13 08:53:42.079938 systemd[1]: sshd@5-137.184.89.200:22-147.75.109.163:59476.service: Deactivated successfully.
Dec 13 08:53:42.082553 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 08:53:42.084341 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit.
Dec 13 08:53:42.091990 systemd[1]: Started sshd@6-137.184.89.200:22-147.75.109.163:59490.service - OpenSSH per-connection server daemon (147.75.109.163:59490).
Dec 13 08:53:42.094407 systemd-logind[1455]: Removed session 6.
Dec 13 08:53:42.161784 sshd[1651]: Accepted publickey for core from 147.75.109.163 port 59490 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:53:42.164742 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:53:42.173824 systemd-logind[1455]: New session 7 of user core.
Dec 13 08:53:42.185652 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 08:53:42.252112 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 08:53:42.253718 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:53:43.362202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:53:43.363038 systemd[1]: kubelet.service: Consumed 1.509s CPU time.
Dec 13 08:53:43.380448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:53:43.418290 systemd[1]: Reloading requested from client PID 1693 ('systemctl') (unit session-7.scope)...
Dec 13 08:53:43.418483 systemd[1]: Reloading...
Dec 13 08:53:43.636229 zram_generator::config[1732]: No configuration found.
Dec 13 08:53:43.820086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:53:43.948913 systemd[1]: Reloading finished in 529 ms.
Dec 13 08:53:44.033737 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 08:53:44.034208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:53:44.048510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:53:44.240575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:53:44.243418 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 08:53:44.332630 kubelet[1786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 08:53:44.332630 kubelet[1786]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 08:53:44.332630 kubelet[1786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 08:53:44.333290 kubelet[1786]: I1213 08:53:44.332714 1786 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 08:53:44.809429 kubelet[1786]: I1213 08:53:44.809344 1786 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 08:53:44.809429 kubelet[1786]: I1213 08:53:44.809411 1786 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 08:53:44.809841 kubelet[1786]: I1213 08:53:44.809798 1786 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 08:53:44.834158 kubelet[1786]: I1213 08:53:44.833970 1786 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 08:53:44.854196 kubelet[1786]: I1213 08:53:44.854101 1786 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 08:53:44.858556 kubelet[1786]: I1213 08:53:44.858471 1786 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 08:53:44.858879 kubelet[1786]: I1213 08:53:44.858825 1786 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 08:53:44.858879 kubelet[1786]: I1213 08:53:44.858868 1786 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 08:53:44.858879 kubelet[1786]: I1213 08:53:44.858880 1786 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 08:53:44.859177 kubelet[1786]: I1213 08:53:44.859048 1786 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 08:53:44.859261 kubelet[1786]: I1213 08:53:44.859206 1786 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 08:53:44.859261 kubelet[1786]: I1213 08:53:44.859225 1786 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 08:53:44.859261 kubelet[1786]: I1213 08:53:44.859261 1786 kubelet.go:312] "Adding apiserver pod source"
Dec 13 08:53:44.861085 kubelet[1786]: I1213 08:53:44.859279 1786 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 08:53:44.861085 kubelet[1786]: E1213 08:53:44.859808 1786 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 08:53:44.861085 kubelet[1786]: E1213 08:53:44.859884 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 08:53:44.861823 kubelet[1786]: I1213 08:53:44.861793 1786 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 08:53:44.865774 kubelet[1786]: I1213 08:53:44.865695 1786 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 08:53:44.869264 kubelet[1786]: W1213 08:53:44.867368 1786 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 08:53:44.869264 kubelet[1786]: I1213 08:53:44.868605 1786 server.go:1256] "Started kubelet"
Dec 13 08:53:44.872666 kubelet[1786]: W1213 08:53:44.872614 1786 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "137.184.89.200" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 08:53:44.872889 kubelet[1786]: E1213 08:53:44.872857 1786 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "137.184.89.200" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 08:53:44.873058 kubelet[1786]: W1213 08:53:44.873040 1786 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 08:53:44.873387 kubelet[1786]: E1213 08:53:44.873362 1786 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 08:53:44.873622 kubelet[1786]: I1213 08:53:44.873580 1786 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 08:53:44.874380 kubelet[1786]: I1213 08:53:44.874349 1786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 08:53:44.874864 kubelet[1786]: I1213 08:53:44.874800 1786 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 08:53:44.877654 kubelet[1786]: I1213 08:53:44.877584 1786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 08:53:44.889290 kubelet[1786]: E1213 08:53:44.889240 1786 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{137.184.89.200.1810b09856641665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:137.184.89.200,UID:137.184.89.200,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:137.184.89.200,},FirstTimestamp:2024-12-13 08:53:44.868558437 +0000 UTC m=+0.613154268,LastTimestamp:2024-12-13 08:53:44.868558437 +0000 UTC m=+0.613154268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:137.184.89.200,}"
Dec 13 08:53:44.892077 kubelet[1786]: I1213 08:53:44.877573 1786 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 08:53:44.894110 kubelet[1786]: E1213 08:53:44.893553 1786 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 08:53:44.894110 kubelet[1786]: E1213 08:53:44.893906 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:44.894110 kubelet[1786]: I1213 08:53:44.893936 1786 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 08:53:44.894110 kubelet[1786]: I1213 08:53:44.894045 1786 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 08:53:44.894110 kubelet[1786]: I1213 08:53:44.894098 1786 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 08:53:44.897294 kubelet[1786]: I1213 08:53:44.897210 1786 factory.go:221] Registration of the systemd container factory successfully
Dec 13 08:53:44.905951 kubelet[1786]: I1213 08:53:44.902951 1786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 08:53:44.913247 kubelet[1786]: I1213 08:53:44.908891 1786 factory.go:221] Registration of the containerd container factory successfully
Dec 13 08:53:44.920211 kubelet[1786]: E1213 08:53:44.919211 1786 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"137.184.89.200\" not found" node="137.184.89.200"
Dec 13 08:53:44.938295 kubelet[1786]: I1213 08:53:44.937783 1786 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 08:53:44.938295 kubelet[1786]: I1213 08:53:44.937818 1786 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 08:53:44.938295 kubelet[1786]: I1213 08:53:44.937847 1786 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 08:53:44.943964 kubelet[1786]: I1213 08:53:44.943775 1786 policy_none.go:49] "None policy: Start"
Dec 13 08:53:44.946146 kubelet[1786]: I1213 08:53:44.946084 1786 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 08:53:44.946146 kubelet[1786]: I1213 08:53:44.946131 1786 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 08:53:44.958558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 08:53:44.986951 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 08:53:44.994214 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 08:53:44.998134 kubelet[1786]: I1213 08:53:44.998082 1786 kubelet_node_status.go:73] "Attempting to register node" node="137.184.89.200"
Dec 13 08:53:45.004841 kubelet[1786]: I1213 08:53:45.004796 1786 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 08:53:45.012465 kubelet[1786]: I1213 08:53:45.012422 1786 kubelet_node_status.go:76] "Successfully registered node" node="137.184.89.200"
Dec 13 08:53:45.016241 kubelet[1786]: I1213 08:53:45.016023 1786 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 08:53:45.022131 kubelet[1786]: E1213 08:53:45.022025 1786 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"137.184.89.200\" not found"
Dec 13 08:53:45.025710 kubelet[1786]: I1213 08:53:45.025663 1786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 08:53:45.028643 kubelet[1786]: I1213 08:53:45.028597 1786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 08:53:45.028889 kubelet[1786]: I1213 08:53:45.028867 1786 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 08:53:45.028997 kubelet[1786]: I1213 08:53:45.028984 1786 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 08:53:45.029479 kubelet[1786]: E1213 08:53:45.029452 1786 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 08:53:45.057040 kubelet[1786]: E1213 08:53:45.056993 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.157802 kubelet[1786]: E1213 08:53:45.157619 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.258481 kubelet[1786]: E1213 08:53:45.258416 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.359107 kubelet[1786]: E1213 08:53:45.359028 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.460315 kubelet[1786]: E1213 08:53:45.460092 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.560687 kubelet[1786]: E1213 08:53:45.560624 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.661817 kubelet[1786]: E1213 08:53:45.661745 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.763119 kubelet[1786]: E1213 08:53:45.762960 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.770979 sudo[1654]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:45.775453 sshd[1651]: pam_unix(sshd:session): session closed for user core
Dec 13 08:53:45.779350 systemd[1]: sshd@6-137.184.89.200:22-147.75.109.163:59490.service: Deactivated successfully.
Dec 13 08:53:45.781715 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 08:53:45.784767 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
Dec 13 08:53:45.786261 systemd-logind[1455]: Removed session 7.
Dec 13 08:53:45.813659 kubelet[1786]: I1213 08:53:45.813561 1786 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 08:53:45.813922 kubelet[1786]: W1213 08:53:45.813859 1786 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 08:53:45.813922 kubelet[1786]: W1213 08:53:45.813920 1786 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 08:53:45.860842 kubelet[1786]: E1213 08:53:45.860729 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 08:53:45.864133 kubelet[1786]: E1213 08:53:45.864056 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:45.965027 kubelet[1786]: E1213 08:53:45.964962 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:46.065922 kubelet[1786]: E1213 08:53:46.065671 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found"
Dec 13 08:53:46.166398 kubelet[1786]: E1213 08:53:46.166328
1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found" Dec 13 08:53:46.267677 kubelet[1786]: E1213 08:53:46.267503 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.89.200\" not found" Dec 13 08:53:46.369535 kubelet[1786]: I1213 08:53:46.368934 1786 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 08:53:46.370815 kubelet[1786]: I1213 08:53:46.369807 1786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 08:53:46.370851 containerd[1477]: time="2024-12-13T08:53:46.369511326Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 08:53:46.861577 kubelet[1786]: E1213 08:53:46.861404 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:46.861577 kubelet[1786]: I1213 08:53:46.861418 1786 apiserver.go:52] "Watching apiserver" Dec 13 08:53:46.867023 kubelet[1786]: I1213 08:53:46.866964 1786 topology_manager.go:215] "Topology Admit Handler" podUID="ba5aff5f-e7db-4e55-ac8c-e5253f3d7000" podNamespace="calico-system" podName="csi-node-driver-wfm6q" Dec 13 08:53:46.867170 kubelet[1786]: I1213 08:53:46.867097 1786 topology_manager.go:215] "Topology Admit Handler" podUID="63bfed31-4125-4ae4-ac86-a239f3436051" podNamespace="kube-system" podName="kube-proxy-j8cjx" Dec 13 08:53:46.867170 kubelet[1786]: I1213 08:53:46.867144 1786 topology_manager.go:215] "Topology Admit Handler" podUID="1ff97c96-2af5-41d3-9ef0-7306f2c03a63" podNamespace="calico-system" podName="calico-node-kh5gb" Dec 13 08:53:46.867823 kubelet[1786]: E1213 08:53:46.867533 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfm6q" podUID="ba5aff5f-e7db-4e55-ac8c-e5253f3d7000" Dec 13 08:53:46.877332 systemd[1]: Created slice kubepods-besteffort-pod1ff97c96_2af5_41d3_9ef0_7306f2c03a63.slice - libcontainer container kubepods-besteffort-pod1ff97c96_2af5_41d3_9ef0_7306f2c03a63.slice. Dec 13 08:53:46.895004 kubelet[1786]: I1213 08:53:46.894907 1786 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 08:53:46.899398 systemd[1]: Created slice kubepods-besteffort-pod63bfed31_4125_4ae4_ac86_a239f3436051.slice - libcontainer container kubepods-besteffort-pod63bfed31_4125_4ae4_ac86_a239f3436051.slice. Dec 13 08:53:46.906081 kubelet[1786]: I1213 08:53:46.906013 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-lib-modules\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906081 kubelet[1786]: I1213 08:53:46.906089 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-var-lib-calico\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906338 kubelet[1786]: I1213 08:53:46.906123 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-cni-net-dir\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906338 kubelet[1786]: I1213 08:53:46.906247 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-cni-log-dir\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906338 kubelet[1786]: I1213 08:53:46.906274 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-flexvol-driver-host\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906338 kubelet[1786]: I1213 08:53:46.906298 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba5aff5f-e7db-4e55-ac8c-e5253f3d7000-kubelet-dir\") pod \"csi-node-driver-wfm6q\" (UID: \"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000\") " pod="calico-system/csi-node-driver-wfm6q" Dec 13 08:53:46.906338 kubelet[1786]: I1213 08:53:46.906318 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ba5aff5f-e7db-4e55-ac8c-e5253f3d7000-registration-dir\") pod \"csi-node-driver-wfm6q\" (UID: \"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000\") " pod="calico-system/csi-node-driver-wfm6q" Dec 13 08:53:46.906482 kubelet[1786]: I1213 08:53:46.906340 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq25c\" (UniqueName: \"kubernetes.io/projected/ba5aff5f-e7db-4e55-ac8c-e5253f3d7000-kube-api-access-qq25c\") pod \"csi-node-driver-wfm6q\" (UID: \"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000\") " pod="calico-system/csi-node-driver-wfm6q" Dec 13 08:53:46.906482 kubelet[1786]: I1213 08:53:46.906361 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63bfed31-4125-4ae4-ac86-a239f3436051-xtables-lock\") pod \"kube-proxy-j8cjx\" (UID: \"63bfed31-4125-4ae4-ac86-a239f3436051\") " pod="kube-system/kube-proxy-j8cjx" Dec 13 08:53:46.906482 kubelet[1786]: I1213 08:53:46.906382 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-node-certs\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906482 kubelet[1786]: I1213 08:53:46.906400 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-var-run-calico\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906482 kubelet[1786]: I1213 08:53:46.906419 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63bfed31-4125-4ae4-ac86-a239f3436051-lib-modules\") pod \"kube-proxy-j8cjx\" (UID: \"63bfed31-4125-4ae4-ac86-a239f3436051\") " pod="kube-system/kube-proxy-j8cjx" Dec 13 08:53:46.906603 kubelet[1786]: I1213 08:53:46.906437 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-tigera-ca-bundle\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906603 kubelet[1786]: I1213 08:53:46.906480 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlq74\" (UniqueName: 
\"kubernetes.io/projected/63bfed31-4125-4ae4-ac86-a239f3436051-kube-api-access-vlq74\") pod \"kube-proxy-j8cjx\" (UID: \"63bfed31-4125-4ae4-ac86-a239f3436051\") " pod="kube-system/kube-proxy-j8cjx" Dec 13 08:53:46.906603 kubelet[1786]: I1213 08:53:46.906513 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-xtables-lock\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906603 kubelet[1786]: I1213 08:53:46.906540 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-policysync\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906603 kubelet[1786]: I1213 08:53:46.906563 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-cni-bin-dir\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906735 kubelet[1786]: I1213 08:53:46.906585 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmqr\" (UniqueName: \"kubernetes.io/projected/1ff97c96-2af5-41d3-9ef0-7306f2c03a63-kube-api-access-tnmqr\") pod \"calico-node-kh5gb\" (UID: \"1ff97c96-2af5-41d3-9ef0-7306f2c03a63\") " pod="calico-system/calico-node-kh5gb" Dec 13 08:53:46.906735 kubelet[1786]: I1213 08:53:46.906613 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ba5aff5f-e7db-4e55-ac8c-e5253f3d7000-varrun\") 
pod \"csi-node-driver-wfm6q\" (UID: \"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000\") " pod="calico-system/csi-node-driver-wfm6q" Dec 13 08:53:46.906735 kubelet[1786]: I1213 08:53:46.906632 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ba5aff5f-e7db-4e55-ac8c-e5253f3d7000-socket-dir\") pod \"csi-node-driver-wfm6q\" (UID: \"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000\") " pod="calico-system/csi-node-driver-wfm6q" Dec 13 08:53:46.906735 kubelet[1786]: I1213 08:53:46.906662 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63bfed31-4125-4ae4-ac86-a239f3436051-kube-proxy\") pod \"kube-proxy-j8cjx\" (UID: \"63bfed31-4125-4ae4-ac86-a239f3436051\") " pod="kube-system/kube-proxy-j8cjx" Dec 13 08:53:47.018248 kubelet[1786]: E1213 08:53:47.017907 1786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:47.018248 kubelet[1786]: W1213 08:53:47.017941 1786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:47.018248 kubelet[1786]: E1213 08:53:47.017968 1786 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:47.029515 kubelet[1786]: E1213 08:53:47.029014 1786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:47.029515 kubelet[1786]: W1213 08:53:47.029046 1786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:47.029515 kubelet[1786]: E1213 08:53:47.029089 1786 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:47.034694 kubelet[1786]: E1213 08:53:47.034585 1786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:47.034694 kubelet[1786]: W1213 08:53:47.034618 1786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:47.034694 kubelet[1786]: E1213 08:53:47.034644 1786 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:53:47.043796 kubelet[1786]: E1213 08:53:47.043749 1786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:53:47.043796 kubelet[1786]: W1213 08:53:47.043783 1786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:53:47.043954 kubelet[1786]: E1213 08:53:47.043818 1786 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:53:47.195407 kubelet[1786]: E1213 08:53:47.194757 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:47.197081 containerd[1477]: time="2024-12-13T08:53:47.196588093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kh5gb,Uid:1ff97c96-2af5-41d3-9ef0-7306f2c03a63,Namespace:calico-system,Attempt:0,}" Dec 13 08:53:47.203119 kubelet[1786]: E1213 08:53:47.202798 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:47.203513 containerd[1477]: time="2024-12-13T08:53:47.203467052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8cjx,Uid:63bfed31-4125-4ae4-ac86-a239f3436051,Namespace:kube-system,Attempt:0,}" Dec 13 08:53:47.801285 containerd[1477]: time="2024-12-13T08:53:47.801015072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 08:53:47.802010 containerd[1477]: time="2024-12-13T08:53:47.801689366Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:47.802772 containerd[1477]: time="2024-12-13T08:53:47.802713038Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:47.803846 containerd[1477]: time="2024-12-13T08:53:47.803803513Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:47.804951 containerd[1477]: time="2024-12-13T08:53:47.804568528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:53:47.806885 containerd[1477]: time="2024-12-13T08:53:47.806838876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:53:47.809291 containerd[1477]: time="2024-12-13T08:53:47.809255264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.559727ms" Dec 13 08:53:47.812422 containerd[1477]: time="2024-12-13T08:53:47.812326435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 608.764923ms" Dec 13 08:53:47.862703 kubelet[1786]: E1213 08:53:47.862641 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:47.976283 containerd[1477]: time="2024-12-13T08:53:47.974123594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:47.976283 containerd[1477]: time="2024-12-13T08:53:47.975351320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:47.976283 containerd[1477]: time="2024-12-13T08:53:47.975374404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:47.976283 containerd[1477]: time="2024-12-13T08:53:47.975485159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:47.982110 containerd[1477]: time="2024-12-13T08:53:47.979968742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:53:47.982110 containerd[1477]: time="2024-12-13T08:53:47.980185998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:53:47.982726 containerd[1477]: time="2024-12-13T08:53:47.980290797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:47.984314 containerd[1477]: time="2024-12-13T08:53:47.983467814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:53:48.022453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301240541.mount: Deactivated successfully. Dec 13 08:53:48.109659 systemd[1]: Started cri-containerd-07b28c9a968025307ee257178e73a99717f0a70ca4b0e8367bb4637869e4b29d.scope - libcontainer container 07b28c9a968025307ee257178e73a99717f0a70ca4b0e8367bb4637869e4b29d. Dec 13 08:53:48.112530 systemd[1]: Started cri-containerd-4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0.scope - libcontainer container 4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0. Dec 13 08:53:48.160726 containerd[1477]: time="2024-12-13T08:53:48.160569864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kh5gb,Uid:1ff97c96-2af5-41d3-9ef0-7306f2c03a63,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0\"" Dec 13 08:53:48.164205 kubelet[1786]: E1213 08:53:48.163341 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:48.166852 containerd[1477]: time="2024-12-13T08:53:48.166787254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 08:53:48.174744 containerd[1477]: time="2024-12-13T08:53:48.174632233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8cjx,Uid:63bfed31-4125-4ae4-ac86-a239f3436051,Namespace:kube-system,Attempt:0,} returns sandbox id \"07b28c9a968025307ee257178e73a99717f0a70ca4b0e8367bb4637869e4b29d\"" Dec 13 08:53:48.176056 kubelet[1786]: E1213 08:53:48.175812 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:48.863260 kubelet[1786]: E1213 08:53:48.863189 1786 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:49.030769 kubelet[1786]: E1213 08:53:49.030315 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfm6q" podUID="ba5aff5f-e7db-4e55-ac8c-e5253f3d7000" Dec 13 08:53:49.554755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552404633.mount: Deactivated successfully. Dec 13 08:53:49.666277 containerd[1477]: time="2024-12-13T08:53:49.666222196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:49.667765 containerd[1477]: time="2024-12-13T08:53:49.667600811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 08:53:49.669560 containerd[1477]: time="2024-12-13T08:53:49.668371039Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:49.670226 containerd[1477]: time="2024-12-13T08:53:49.670197220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:49.671717 containerd[1477]: time="2024-12-13T08:53:49.671667141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.50481722s" Dec 13 08:53:49.671717 containerd[1477]: time="2024-12-13T08:53:49.671711592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 08:53:49.673046 containerd[1477]: time="2024-12-13T08:53:49.672999775Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 08:53:49.674289 containerd[1477]: time="2024-12-13T08:53:49.674112938Z" level=info msg="CreateContainer within sandbox \"4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 08:53:49.697459 containerd[1477]: time="2024-12-13T08:53:49.697391802Z" level=info msg="CreateContainer within sandbox \"4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2\"" Dec 13 08:53:49.698946 containerd[1477]: time="2024-12-13T08:53:49.698639452Z" level=info msg="StartContainer for \"db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2\"" Dec 13 08:53:49.750494 systemd[1]: Started cri-containerd-db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2.scope - libcontainer container db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2. Dec 13 08:53:49.794875 containerd[1477]: time="2024-12-13T08:53:49.793864409Z" level=info msg="StartContainer for \"db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2\" returns successfully" Dec 13 08:53:49.812239 systemd[1]: cri-containerd-db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2.scope: Deactivated successfully. 
Dec 13 08:53:49.863417 kubelet[1786]: E1213 08:53:49.863326 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:49.883853 containerd[1477]: time="2024-12-13T08:53:49.883603843Z" level=info msg="shim disconnected" id=db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2 namespace=k8s.io Dec 13 08:53:49.883853 containerd[1477]: time="2024-12-13T08:53:49.883676430Z" level=warning msg="cleaning up after shim disconnected" id=db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2 namespace=k8s.io Dec 13 08:53:49.883853 containerd[1477]: time="2024-12-13T08:53:49.883686610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:53:50.054776 kubelet[1786]: E1213 08:53:50.054736 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:50.513465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db669c744c2dff74dbbeb987f48897e19041cc21469d62a2ca7aac796db62ca2-rootfs.mount: Deactivated successfully. Dec 13 08:53:50.862612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104470842.mount: Deactivated successfully. 
Dec 13 08:53:50.864882 kubelet[1786]: E1213 08:53:50.864772 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:51.030434 kubelet[1786]: E1213 08:53:51.029998 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfm6q" podUID="ba5aff5f-e7db-4e55-ac8c-e5253f3d7000" Dec 13 08:53:51.409185 containerd[1477]: time="2024-12-13T08:53:51.409095472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:51.410567 containerd[1477]: time="2024-12-13T08:53:51.409672327Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 08:53:51.410567 containerd[1477]: time="2024-12-13T08:53:51.410519133Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:51.412735 containerd[1477]: time="2024-12-13T08:53:51.412693433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:51.413516 containerd[1477]: time="2024-12-13T08:53:51.413487448Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.740451015s" Dec 13 08:53:51.413589 containerd[1477]: 
time="2024-12-13T08:53:51.413522421Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 08:53:51.414278 containerd[1477]: time="2024-12-13T08:53:51.414247357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 08:53:51.417432 containerd[1477]: time="2024-12-13T08:53:51.417402623Z" level=info msg="CreateContainer within sandbox \"07b28c9a968025307ee257178e73a99717f0a70ca4b0e8367bb4637869e4b29d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 08:53:51.439745 containerd[1477]: time="2024-12-13T08:53:51.439701142Z" level=info msg="CreateContainer within sandbox \"07b28c9a968025307ee257178e73a99717f0a70ca4b0e8367bb4637869e4b29d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"235fe8443f89ef96406f0d7fa356c4a08faad6143557de9ac6771f1814eb7b3b\"" Dec 13 08:53:51.440609 containerd[1477]: time="2024-12-13T08:53:51.440509851Z" level=info msg="StartContainer for \"235fe8443f89ef96406f0d7fa356c4a08faad6143557de9ac6771f1814eb7b3b\"" Dec 13 08:53:51.480542 systemd[1]: Started cri-containerd-235fe8443f89ef96406f0d7fa356c4a08faad6143557de9ac6771f1814eb7b3b.scope - libcontainer container 235fe8443f89ef96406f0d7fa356c4a08faad6143557de9ac6771f1814eb7b3b. 
Dec 13 08:53:51.522243 containerd[1477]: time="2024-12-13T08:53:51.522193540Z" level=info msg="StartContainer for \"235fe8443f89ef96406f0d7fa356c4a08faad6143557de9ac6771f1814eb7b3b\" returns successfully" Dec 13 08:53:51.865985 kubelet[1786]: E1213 08:53:51.865819 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:52.060614 kubelet[1786]: E1213 08:53:52.059845 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:52.080335 kubelet[1786]: I1213 08:53:52.080253 1786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j8cjx" podStartSLOduration=3.842585266 podStartE2EDuration="7.08001078s" podCreationTimestamp="2024-12-13 08:53:45 +0000 UTC" firstStartedPulling="2024-12-13 08:53:48.17655645 +0000 UTC m=+3.921152203" lastFinishedPulling="2024-12-13 08:53:51.413981965 +0000 UTC m=+7.158577717" observedRunningTime="2024-12-13 08:53:52.079876723 +0000 UTC m=+7.824472493" watchObservedRunningTime="2024-12-13 08:53:52.08001078 +0000 UTC m=+7.824606546" Dec 13 08:53:52.868184 kubelet[1786]: E1213 08:53:52.867616 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:53.030988 kubelet[1786]: E1213 08:53:53.030304 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfm6q" podUID="ba5aff5f-e7db-4e55-ac8c-e5253f3d7000" Dec 13 08:53:53.062369 kubelet[1786]: E1213 08:53:53.062332 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:53.867852 kubelet[1786]: E1213 08:53:53.867784 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:54.869067 kubelet[1786]: E1213 08:53:54.869005 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:55.030212 kubelet[1786]: E1213 08:53:55.029334 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfm6q" podUID="ba5aff5f-e7db-4e55-ac8c-e5253f3d7000" Dec 13 08:53:55.178052 containerd[1477]: time="2024-12-13T08:53:55.177615199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:55.178807 containerd[1477]: time="2024-12-13T08:53:55.178755860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 08:53:55.180092 containerd[1477]: time="2024-12-13T08:53:55.180056670Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:55.189436 containerd[1477]: time="2024-12-13T08:53:55.189359995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:53:55.190178 containerd[1477]: time="2024-12-13T08:53:55.190026627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.775626831s" Dec 13 08:53:55.190178 containerd[1477]: time="2024-12-13T08:53:55.190066972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 08:53:55.192889 containerd[1477]: time="2024-12-13T08:53:55.192849674Z" level=info msg="CreateContainer within sandbox \"4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 08:53:55.210930 containerd[1477]: time="2024-12-13T08:53:55.210788259Z" level=info msg="CreateContainer within sandbox \"4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665\"" Dec 13 08:53:55.212999 containerd[1477]: time="2024-12-13T08:53:55.211603520Z" level=info msg="StartContainer for \"66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665\"" Dec 13 08:53:55.245715 systemd[1]: run-containerd-runc-k8s.io-66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665-runc.TrBcBt.mount: Deactivated successfully. Dec 13 08:53:55.253397 systemd[1]: Started cri-containerd-66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665.scope - libcontainer container 66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665. 
Dec 13 08:53:55.294576 containerd[1477]: time="2024-12-13T08:53:55.294448424Z" level=info msg="StartContainer for \"66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665\" returns successfully" Dec 13 08:53:55.869595 kubelet[1786]: E1213 08:53:55.869534 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:55.957594 systemd[1]: cri-containerd-66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665.scope: Deactivated successfully. Dec 13 08:53:55.969790 kubelet[1786]: I1213 08:53:55.969749 1786 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 08:53:55.988509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665-rootfs.mount: Deactivated successfully. Dec 13 08:53:56.098878 kubelet[1786]: E1213 08:53:56.098820 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:56.124888 containerd[1477]: time="2024-12-13T08:53:56.124701034Z" level=info msg="shim disconnected" id=66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665 namespace=k8s.io Dec 13 08:53:56.124888 containerd[1477]: time="2024-12-13T08:53:56.124761279Z" level=warning msg="cleaning up after shim disconnected" id=66a9c2bacfc59574773ec71dc42711071f531456c13aa9c9afb87e028d805665 namespace=k8s.io Dec 13 08:53:56.124888 containerd[1477]: time="2024-12-13T08:53:56.124770879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:53:56.139990 containerd[1477]: time="2024-12-13T08:53:56.139917128Z" level=warning msg="cleanup warnings time=\"2024-12-13T08:53:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 
08:53:56.870191 kubelet[1786]: E1213 08:53:56.870119 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:57.036088 systemd[1]: Created slice kubepods-besteffort-podba5aff5f_e7db_4e55_ac8c_e5253f3d7000.slice - libcontainer container kubepods-besteffort-podba5aff5f_e7db_4e55_ac8c_e5253f3d7000.slice. Dec 13 08:53:57.039115 containerd[1477]: time="2024-12-13T08:53:57.039071919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wfm6q,Uid:ba5aff5f-e7db-4e55-ac8c-e5253f3d7000,Namespace:calico-system,Attempt:0,}" Dec 13 08:53:57.103578 kubelet[1786]: E1213 08:53:57.102835 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:53:57.103836 containerd[1477]: time="2024-12-13T08:53:57.103801328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 08:53:57.106431 systemd-resolved[1367]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 13 08:53:57.119284 containerd[1477]: time="2024-12-13T08:53:57.119230119Z" level=error msg="Failed to destroy network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:57.121271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30-shm.mount: Deactivated successfully. 
Dec 13 08:53:57.122044 containerd[1477]: time="2024-12-13T08:53:57.121712278Z" level=error msg="encountered an error cleaning up failed sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:57.122044 containerd[1477]: time="2024-12-13T08:53:57.121909928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wfm6q,Uid:ba5aff5f-e7db-4e55-ac8c-e5253f3d7000,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:57.123675 kubelet[1786]: E1213 08:53:57.123468 1786 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:57.123675 kubelet[1786]: E1213 08:53:57.123552 1786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wfm6q" Dec 13 08:53:57.123675 kubelet[1786]: E1213 08:53:57.123579 1786 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wfm6q" Dec 13 08:53:57.123815 kubelet[1786]: E1213 08:53:57.123637 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wfm6q_calico-system(ba5aff5f-e7db-4e55-ac8c-e5253f3d7000)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wfm6q_calico-system(ba5aff5f-e7db-4e55-ac8c-e5253f3d7000)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wfm6q" podUID="ba5aff5f-e7db-4e55-ac8c-e5253f3d7000" Dec 13 08:53:57.862891 kubelet[1786]: I1213 08:53:57.862738 1786 topology_manager.go:215] "Topology Admit Handler" podUID="28d52e1e-e1cd-4d9b-8b76-884c4325f94f" podNamespace="default" podName="nginx-deployment-6d5f899847-fr59d" Dec 13 08:53:57.870258 systemd[1]: Created slice kubepods-besteffort-pod28d52e1e_e1cd_4d9b_8b76_884c4325f94f.slice - libcontainer container kubepods-besteffort-pod28d52e1e_e1cd_4d9b_8b76_884c4325f94f.slice. 
Dec 13 08:53:57.870507 kubelet[1786]: E1213 08:53:57.870478 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:58.001801 kubelet[1786]: I1213 08:53:58.001699 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77f6c\" (UniqueName: \"kubernetes.io/projected/28d52e1e-e1cd-4d9b-8b76-884c4325f94f-kube-api-access-77f6c\") pod \"nginx-deployment-6d5f899847-fr59d\" (UID: \"28d52e1e-e1cd-4d9b-8b76-884c4325f94f\") " pod="default/nginx-deployment-6d5f899847-fr59d" Dec 13 08:53:58.105211 kubelet[1786]: I1213 08:53:58.105172 1786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Dec 13 08:53:58.106427 containerd[1477]: time="2024-12-13T08:53:58.106383648Z" level=info msg="StopPodSandbox for \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\"" Dec 13 08:53:58.107650 containerd[1477]: time="2024-12-13T08:53:58.107246995Z" level=info msg="Ensure that sandbox ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30 in task-service has been cleanup successfully" Dec 13 08:53:58.158029 containerd[1477]: time="2024-12-13T08:53:58.157817266Z" level=error msg="StopPodSandbox for \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\" failed" error="failed to destroy network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:58.158678 kubelet[1786]: E1213 08:53:58.158203 1786 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Dec 13 08:53:58.158678 kubelet[1786]: E1213 08:53:58.158293 1786 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"} Dec 13 08:53:58.158678 kubelet[1786]: E1213 08:53:58.158334 1786 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:53:58.158678 kubelet[1786]: E1213 08:53:58.158366 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wfm6q" podUID="ba5aff5f-e7db-4e55-ac8c-e5253f3d7000" Dec 13 08:53:58.174932 containerd[1477]: time="2024-12-13T08:53:58.174838851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-fr59d,Uid:28d52e1e-e1cd-4d9b-8b76-884c4325f94f,Namespace:default,Attempt:0,}" Dec 13 08:53:58.262119 containerd[1477]: 
time="2024-12-13T08:53:58.262031722Z" level=error msg="Failed to destroy network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:58.264585 containerd[1477]: time="2024-12-13T08:53:58.264525280Z" level=error msg="encountered an error cleaning up failed sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:58.264704 containerd[1477]: time="2024-12-13T08:53:58.264622564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-fr59d,Uid:28d52e1e-e1cd-4d9b-8b76-884c4325f94f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:58.265376 kubelet[1786]: E1213 08:53:58.264970 1786 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:58.265376 kubelet[1786]: E1213 08:53:58.265055 1786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-fr59d" Dec 13 08:53:58.265376 kubelet[1786]: E1213 08:53:58.265093 1786 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-fr59d" Dec 13 08:53:58.266282 kubelet[1786]: E1213 08:53:58.265208 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-fr59d_default(28d52e1e-e1cd-4d9b-8b76-884c4325f94f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-fr59d_default(28d52e1e-e1cd-4d9b-8b76-884c4325f94f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-fr59d" podUID="28d52e1e-e1cd-4d9b-8b76-884c4325f94f" Dec 13 08:53:58.265763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611-shm.mount: Deactivated successfully. 
Dec 13 08:53:58.870916 kubelet[1786]: E1213 08:53:58.870865 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:53:59.108765 kubelet[1786]: I1213 08:53:59.108181 1786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:53:59.110168 containerd[1477]: time="2024-12-13T08:53:59.109292003Z" level=info msg="StopPodSandbox for \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\"" Dec 13 08:53:59.110168 containerd[1477]: time="2024-12-13T08:53:59.109543887Z" level=info msg="Ensure that sandbox a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611 in task-service has been cleanup successfully" Dec 13 08:53:59.156882 containerd[1477]: time="2024-12-13T08:53:59.156533389Z" level=error msg="StopPodSandbox for \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\" failed" error="failed to destroy network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:53:59.157398 kubelet[1786]: E1213 08:53:59.157363 1786 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:53:59.157612 kubelet[1786]: E1213 08:53:59.157588 1786 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611"} Dec 13 08:53:59.157710 kubelet[1786]: E1213 08:53:59.157698 1786 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28d52e1e-e1cd-4d9b-8b76-884c4325f94f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:53:59.158064 kubelet[1786]: E1213 08:53:59.158027 1786 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28d52e1e-e1cd-4d9b-8b76-884c4325f94f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-fr59d" podUID="28d52e1e-e1cd-4d9b-8b76-884c4325f94f" Dec 13 08:53:59.871976 kubelet[1786]: E1213 08:53:59.871907 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:00.194464 systemd-resolved[1367]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Dec 13 08:54:00.872679 kubelet[1786]: E1213 08:54:00.872591 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:01.873943 kubelet[1786]: E1213 08:54:01.873717 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:02.874255 kubelet[1786]: E1213 08:54:02.874203 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:03.353743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158225824.mount: Deactivated successfully. Dec 13 08:54:03.406593 containerd[1477]: time="2024-12-13T08:54:03.406479585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:03.408263 containerd[1477]: time="2024-12-13T08:54:03.408186864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 08:54:03.409467 containerd[1477]: time="2024-12-13T08:54:03.409381443Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:03.411905 containerd[1477]: time="2024-12-13T08:54:03.411830148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:03.412963 containerd[1477]: time="2024-12-13T08:54:03.412507825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.308660488s" Dec 13 08:54:03.412963 containerd[1477]: time="2024-12-13T08:54:03.412547930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 08:54:03.447687 containerd[1477]: time="2024-12-13T08:54:03.447638708Z" level=info msg="CreateContainer within sandbox \"4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 08:54:03.524950 containerd[1477]: time="2024-12-13T08:54:03.524885596Z" level=info msg="CreateContainer within sandbox \"4fdc8c616454f858fa3edd648bef09a50208dd7b6875afa5ba5face28cdf05f0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f586bf129f68c53679d8990c398a9f1e642af0ed77b08737752e0b6699ecedc\"" Dec 13 08:54:03.527177 containerd[1477]: time="2024-12-13T08:54:03.525661463Z" level=info msg="StartContainer for \"9f586bf129f68c53679d8990c398a9f1e642af0ed77b08737752e0b6699ecedc\"" Dec 13 08:54:03.599366 systemd[1]: Started cri-containerd-9f586bf129f68c53679d8990c398a9f1e642af0ed77b08737752e0b6699ecedc.scope - libcontainer container 9f586bf129f68c53679d8990c398a9f1e642af0ed77b08737752e0b6699ecedc. Dec 13 08:54:03.690446 containerd[1477]: time="2024-12-13T08:54:03.690278923Z" level=info msg="StartContainer for \"9f586bf129f68c53679d8990c398a9f1e642af0ed77b08737752e0b6699ecedc\" returns successfully" Dec 13 08:54:03.774621 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 08:54:03.775588 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 08:54:03.875180 kubelet[1786]: E1213 08:54:03.875070 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:04.125322 kubelet[1786]: E1213 08:54:04.125290 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:54:04.860282 kubelet[1786]: E1213 08:54:04.860203 1786 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:04.876099 kubelet[1786]: E1213 08:54:04.876038 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:05.128649 kubelet[1786]: E1213 08:54:05.127752 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:54:05.452197 kernel: bpftool[2551]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 08:54:05.752930 systemd-networkd[1366]: vxlan.calico: Link UP Dec 13 08:54:05.752946 systemd-networkd[1366]: vxlan.calico: Gained carrier Dec 13 08:54:05.876466 kubelet[1786]: E1213 08:54:05.876317 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:06.877202 kubelet[1786]: E1213 08:54:06.877068 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:07.363002 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Dec 13 08:54:07.878417 kubelet[1786]: E1213 08:54:07.878323 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:08.878953 kubelet[1786]: E1213 08:54:08.878779 1786 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:09.879558 kubelet[1786]: E1213 08:54:09.879470 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:10.879998 kubelet[1786]: E1213 08:54:10.879932 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:11.880649 kubelet[1786]: E1213 08:54:11.880565 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:12.881498 kubelet[1786]: E1213 08:54:12.881429 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:13.031952 containerd[1477]: time="2024-12-13T08:54:13.030972135Z" level=info msg="StopPodSandbox for \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\"" Dec 13 08:54:13.032413 containerd[1477]: time="2024-12-13T08:54:13.032225930Z" level=info msg="StopPodSandbox for \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\"" Dec 13 08:54:13.109364 kubelet[1786]: I1213 08:54:13.108896 1786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-kh5gb" podStartSLOduration=12.860908305 podStartE2EDuration="28.108844648s" podCreationTimestamp="2024-12-13 08:53:45 +0000 UTC" firstStartedPulling="2024-12-13 08:53:48.164904353 +0000 UTC m=+3.909500106" lastFinishedPulling="2024-12-13 08:54:03.412840709 +0000 UTC m=+19.157436449" observedRunningTime="2024-12-13 08:54:04.149107101 +0000 UTC m=+19.893702864" watchObservedRunningTime="2024-12-13 08:54:13.108844648 +0000 UTC m=+28.853440405" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.109 [INFO][2677] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 
08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.109 [INFO][2677] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" iface="eth0" netns="/var/run/netns/cni-fe2bfb77-550f-5970-efcd-9128594a1171" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.110 [INFO][2677] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" iface="eth0" netns="/var/run/netns/cni-fe2bfb77-550f-5970-efcd-9128594a1171" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.112 [INFO][2677] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" iface="eth0" netns="/var/run/netns/cni-fe2bfb77-550f-5970-efcd-9128594a1171" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.112 [INFO][2677] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.112 [INFO][2677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.161 [INFO][2688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.162 [INFO][2688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.162 [INFO][2688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.174 [WARNING][2688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.174 [INFO][2688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.177 [INFO][2688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:13.181637 containerd[1477]: 2024-12-13 08:54:13.180 [INFO][2677] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:13.185088 containerd[1477]: time="2024-12-13T08:54:13.183937887Z" level=info msg="TearDown network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\" successfully" Dec 13 08:54:13.185088 containerd[1477]: time="2024-12-13T08:54:13.183987545Z" level=info msg="StopPodSandbox for \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\" returns successfully" Dec 13 08:54:13.185431 containerd[1477]: time="2024-12-13T08:54:13.185264046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-fr59d,Uid:28d52e1e-e1cd-4d9b-8b76-884c4325f94f,Namespace:default,Attempt:1,}" Dec 13 08:54:13.187122 systemd[1]: run-netns-cni\x2dfe2bfb77\x2d550f\x2d5970\x2defcd\x2d9128594a1171.mount: Deactivated successfully. 
Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.112 [INFO][2673] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.112 [INFO][2673] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" iface="eth0" netns="/var/run/netns/cni-35f35369-052b-cc22-0d1e-d37f4fcc56e0" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.113 [INFO][2673] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" iface="eth0" netns="/var/run/netns/cni-35f35369-052b-cc22-0d1e-d37f4fcc56e0" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.113 [INFO][2673] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" iface="eth0" netns="/var/run/netns/cni-35f35369-052b-cc22-0d1e-d37f4fcc56e0" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.113 [INFO][2673] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.114 [INFO][2673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.169 [INFO][2689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.170 [INFO][2689] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.177 [INFO][2689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.188 [WARNING][2689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.189 [INFO][2689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.191 [INFO][2689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:13.194998 containerd[1477]: 2024-12-13 08:54:13.193 [INFO][2673] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Dec 13 08:54:13.194998 containerd[1477]: time="2024-12-13T08:54:13.194937137Z" level=info msg="TearDown network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\" successfully" Dec 13 08:54:13.194998 containerd[1477]: time="2024-12-13T08:54:13.194988107Z" level=info msg="StopPodSandbox for \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\" returns successfully" Dec 13 08:54:13.196986 containerd[1477]: time="2024-12-13T08:54:13.196755990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wfm6q,Uid:ba5aff5f-e7db-4e55-ac8c-e5253f3d7000,Namespace:calico-system,Attempt:1,}" Dec 13 08:54:13.199669 systemd[1]: run-netns-cni\x2d35f35369\x2d052b\x2dcc22\x2d0d1e\x2dd37f4fcc56e0.mount: Deactivated successfully. Dec 13 08:54:13.383628 systemd-networkd[1366]: cali627a9bd1ab5: Link UP Dec 13 08:54:13.383889 systemd-networkd[1366]: cali627a9bd1ab5: Gained carrier Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.269 [INFO][2701] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0 nginx-deployment-6d5f899847- default 28d52e1e-e1cd-4d9b-8b76-884c4325f94f 1144 0 2024-12-13 08:53:57 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 137.184.89.200 nginx-deployment-6d5f899847-fr59d eth0 default [] [] [kns.default ksa.default.default] cali627a9bd1ab5 [] []}} ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Namespace="default" Pod="nginx-deployment-6d5f899847-fr59d" WorkloadEndpoint="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.269 [INFO][2701] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Namespace="default" Pod="nginx-deployment-6d5f899847-fr59d" WorkloadEndpoint="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.309 [INFO][2724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" HandleID="k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.327 [INFO][2724] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" HandleID="k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"137.184.89.200", "pod":"nginx-deployment-6d5f899847-fr59d", "timestamp":"2024-12-13 08:54:13.309114718 +0000 UTC"}, Hostname:"137.184.89.200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.327 [INFO][2724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.327 [INFO][2724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.327 [INFO][2724] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '137.184.89.200' Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.331 [INFO][2724] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.340 [INFO][2724] ipam/ipam.go 372: Looking up existing affinities for host host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.347 [INFO][2724] ipam/ipam.go 489: Trying affinity for 192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.350 [INFO][2724] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.354 [INFO][2724] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.354 [INFO][2724] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.357 [INFO][2724] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.367 [INFO][2724] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.376 [INFO][2724] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.193/26] block=192.168.124.192/26 
handle="k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.376 [INFO][2724] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.193/26] handle="k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" host="137.184.89.200" Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.376 [INFO][2724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:13.400174 containerd[1477]: 2024-12-13 08:54:13.376 [INFO][2724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.193/26] IPv6=[] ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" HandleID="k8s-pod-network.1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.402628 containerd[1477]: 2024-12-13 08:54:13.378 [INFO][2701] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Namespace="default" Pod="nginx-deployment-6d5f899847-fr59d" WorkloadEndpoint="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"28d52e1e-e1cd-4d9b-8b76-884c4325f94f", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"", Pod:"nginx-deployment-6d5f899847-fr59d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali627a9bd1ab5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:13.402628 containerd[1477]: 2024-12-13 08:54:13.378 [INFO][2701] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.193/32] ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Namespace="default" Pod="nginx-deployment-6d5f899847-fr59d" WorkloadEndpoint="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.402628 containerd[1477]: 2024-12-13 08:54:13.378 [INFO][2701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali627a9bd1ab5 ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Namespace="default" Pod="nginx-deployment-6d5f899847-fr59d" WorkloadEndpoint="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.402628 containerd[1477]: 2024-12-13 08:54:13.385 [INFO][2701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Namespace="default" Pod="nginx-deployment-6d5f899847-fr59d" WorkloadEndpoint="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.402628 containerd[1477]: 2024-12-13 08:54:13.386 [INFO][2701] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Namespace="default" 
Pod="nginx-deployment-6d5f899847-fr59d" WorkloadEndpoint="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"28d52e1e-e1cd-4d9b-8b76-884c4325f94f", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b", Pod:"nginx-deployment-6d5f899847-fr59d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali627a9bd1ab5", MAC:"d6:72:b2:26:3c:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:13.402628 containerd[1477]: 2024-12-13 08:54:13.397 [INFO][2701] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b" Namespace="default" Pod="nginx-deployment-6d5f899847-fr59d" WorkloadEndpoint="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:13.463686 containerd[1477]: time="2024-12-13T08:54:13.463353175Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:54:13.465761 containerd[1477]: time="2024-12-13T08:54:13.463610454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:54:13.465761 containerd[1477]: time="2024-12-13T08:54:13.463636021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:13.466610 containerd[1477]: time="2024-12-13T08:54:13.466006108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:13.476492 systemd-networkd[1366]: calia3749974770: Link UP Dec 13 08:54:13.478673 systemd-networkd[1366]: calia3749974770: Gained carrier Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.280 [INFO][2710] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {137.184.89.200-k8s-csi--node--driver--wfm6q-eth0 csi-node-driver- calico-system ba5aff5f-e7db-4e55-ac8c-e5253f3d7000 1145 0 2024-12-13 08:53:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 137.184.89.200 csi-node-driver-wfm6q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia3749974770 [] []}} ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Namespace="calico-system" Pod="csi-node-driver-wfm6q" WorkloadEndpoint="137.184.89.200-k8s-csi--node--driver--wfm6q-" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.280 [INFO][2710] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Namespace="calico-system" Pod="csi-node-driver-wfm6q" WorkloadEndpoint="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.331 [INFO][2729] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" HandleID="k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.345 [INFO][2729] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" HandleID="k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334de0), Attrs:map[string]string{"namespace":"calico-system", "node":"137.184.89.200", "pod":"csi-node-driver-wfm6q", "timestamp":"2024-12-13 08:54:13.330986337 +0000 UTC"}, Hostname:"137.184.89.200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.345 [INFO][2729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.376 [INFO][2729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.376 [INFO][2729] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '137.184.89.200' Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.385 [INFO][2729] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.396 [INFO][2729] ipam/ipam.go 372: Looking up existing affinities for host host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.424 [INFO][2729] ipam/ipam.go 489: Trying affinity for 192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.430 [INFO][2729] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.437 [INFO][2729] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.437 [INFO][2729] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.441 [INFO][2729] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8 Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.452 [INFO][2729] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.465 [INFO][2729] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.194/26] block=192.168.124.192/26 
handle="k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.465 [INFO][2729] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.194/26] handle="k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" host="137.184.89.200" Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.465 [INFO][2729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:13.503866 containerd[1477]: 2024-12-13 08:54:13.465 [INFO][2729] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.194/26] IPv6=[] ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" HandleID="k8s-pod-network.8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.504922 containerd[1477]: 2024-12-13 08:54:13.469 [INFO][2710] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Namespace="calico-system" Pod="csi-node-driver-wfm6q" WorkloadEndpoint="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-csi--node--driver--wfm6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"", Pod:"csi-node-driver-wfm6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3749974770", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:13.504922 containerd[1477]: 2024-12-13 08:54:13.470 [INFO][2710] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.194/32] ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Namespace="calico-system" Pod="csi-node-driver-wfm6q" WorkloadEndpoint="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.504922 containerd[1477]: 2024-12-13 08:54:13.470 [INFO][2710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3749974770 ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Namespace="calico-system" Pod="csi-node-driver-wfm6q" WorkloadEndpoint="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.504922 containerd[1477]: 2024-12-13 08:54:13.476 [INFO][2710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Namespace="calico-system" Pod="csi-node-driver-wfm6q" WorkloadEndpoint="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.504922 containerd[1477]: 2024-12-13 08:54:13.477 [INFO][2710] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Namespace="calico-system" Pod="csi-node-driver-wfm6q" WorkloadEndpoint="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-csi--node--driver--wfm6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8", Pod:"csi-node-driver-wfm6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3749974770", MAC:"26:0a:3d:50:7c:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:13.504922 containerd[1477]: 2024-12-13 08:54:13.490 [INFO][2710] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8" Namespace="calico-system" 
Pod="csi-node-driver-wfm6q" WorkloadEndpoint="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0" Dec 13 08:54:13.528435 systemd[1]: Started cri-containerd-1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b.scope - libcontainer container 1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b. Dec 13 08:54:13.561121 containerd[1477]: time="2024-12-13T08:54:13.560680953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:54:13.561121 containerd[1477]: time="2024-12-13T08:54:13.560771671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:54:13.561121 containerd[1477]: time="2024-12-13T08:54:13.560797024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:13.561121 containerd[1477]: time="2024-12-13T08:54:13.560922337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:13.591981 systemd[1]: Started cri-containerd-8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8.scope - libcontainer container 8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8. Dec 13 08:54:13.615849 containerd[1477]: time="2024-12-13T08:54:13.615397360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-fr59d,Uid:28d52e1e-e1cd-4d9b-8b76-884c4325f94f,Namespace:default,Attempt:1,} returns sandbox id \"1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b\"" Dec 13 08:54:13.618398 containerd[1477]: time="2024-12-13T08:54:13.618358538Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 08:54:13.621372 systemd-resolved[1367]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Dec 13 08:54:13.645028 containerd[1477]: time="2024-12-13T08:54:13.644851577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wfm6q,Uid:ba5aff5f-e7db-4e55-ac8c-e5253f3d7000,Namespace:calico-system,Attempt:1,} returns sandbox id \"8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8\"" Dec 13 08:54:13.881956 kubelet[1786]: E1213 08:54:13.881885 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:14.722623 systemd-networkd[1366]: calia3749974770: Gained IPv6LL Dec 13 08:54:14.884207 kubelet[1786]: E1213 08:54:14.883034 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:15.298802 systemd-networkd[1366]: cali627a9bd1ab5: Gained IPv6LL Dec 13 08:54:15.883689 kubelet[1786]: E1213 08:54:15.883648 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:16.258905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044368175.mount: Deactivated successfully. 
Dec 13 08:54:16.884459 kubelet[1786]: E1213 08:54:16.884401 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:17.672008 containerd[1477]: time="2024-12-13T08:54:17.671533991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:17.673719 containerd[1477]: time="2024-12-13T08:54:17.673633634Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 08:54:17.674647 containerd[1477]: time="2024-12-13T08:54:17.674585320Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:17.677125 containerd[1477]: time="2024-12-13T08:54:17.677089828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:17.680513 containerd[1477]: time="2024-12-13T08:54:17.679294401Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 4.060671475s" Dec 13 08:54:17.680513 containerd[1477]: time="2024-12-13T08:54:17.679337143Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 08:54:17.681076 containerd[1477]: time="2024-12-13T08:54:17.680816166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 08:54:17.682897 containerd[1477]: time="2024-12-13T08:54:17.682865015Z" 
level=info msg="CreateContainer within sandbox \"1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 08:54:17.699466 containerd[1477]: time="2024-12-13T08:54:17.699386504Z" level=info msg="CreateContainer within sandbox \"1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a984d69765ebc1387be14c407dbb945a94e3d6c7af984a1d5ffad740ebf003d7\"" Dec 13 08:54:17.701252 containerd[1477]: time="2024-12-13T08:54:17.700522714Z" level=info msg="StartContainer for \"a984d69765ebc1387be14c407dbb945a94e3d6c7af984a1d5ffad740ebf003d7\"" Dec 13 08:54:17.741094 systemd[1]: run-containerd-runc-k8s.io-a984d69765ebc1387be14c407dbb945a94e3d6c7af984a1d5ffad740ebf003d7-runc.kGag0J.mount: Deactivated successfully. Dec 13 08:54:17.752449 systemd[1]: Started cri-containerd-a984d69765ebc1387be14c407dbb945a94e3d6c7af984a1d5ffad740ebf003d7.scope - libcontainer container a984d69765ebc1387be14c407dbb945a94e3d6c7af984a1d5ffad740ebf003d7. 
Dec 13 08:54:17.786633 containerd[1477]: time="2024-12-13T08:54:17.786588711Z" level=info msg="StartContainer for \"a984d69765ebc1387be14c407dbb945a94e3d6c7af984a1d5ffad740ebf003d7\" returns successfully" Dec 13 08:54:17.884715 kubelet[1786]: E1213 08:54:17.884627 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:18.885098 kubelet[1786]: E1213 08:54:18.885010 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:19.124530 containerd[1477]: time="2024-12-13T08:54:19.124411705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:19.125564 containerd[1477]: time="2024-12-13T08:54:19.125408549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 08:54:19.126871 containerd[1477]: time="2024-12-13T08:54:19.126809844Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:19.130413 containerd[1477]: time="2024-12-13T08:54:19.130342265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:19.131425 containerd[1477]: time="2024-12-13T08:54:19.131232112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.450374353s" Dec 13 08:54:19.131425 containerd[1477]: 
time="2024-12-13T08:54:19.131284678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 08:54:19.134312 containerd[1477]: time="2024-12-13T08:54:19.134260878Z" level=info msg="CreateContainer within sandbox \"8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 08:54:19.160804 containerd[1477]: time="2024-12-13T08:54:19.160540940Z" level=info msg="CreateContainer within sandbox \"8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7650e961146a1ea1e45672c7222aadcb1750edb0f9edc3bb8b90991bde8eb7c5\"" Dec 13 08:54:19.161780 containerd[1477]: time="2024-12-13T08:54:19.161670910Z" level=info msg="StartContainer for \"7650e961146a1ea1e45672c7222aadcb1750edb0f9edc3bb8b90991bde8eb7c5\"" Dec 13 08:54:19.202420 systemd[1]: Started cri-containerd-7650e961146a1ea1e45672c7222aadcb1750edb0f9edc3bb8b90991bde8eb7c5.scope - libcontainer container 7650e961146a1ea1e45672c7222aadcb1750edb0f9edc3bb8b90991bde8eb7c5. 
Dec 13 08:54:19.244545 containerd[1477]: time="2024-12-13T08:54:19.244482568Z" level=info msg="StartContainer for \"7650e961146a1ea1e45672c7222aadcb1750edb0f9edc3bb8b90991bde8eb7c5\" returns successfully" Dec 13 08:54:19.247119 containerd[1477]: time="2024-12-13T08:54:19.247069939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 08:54:19.885751 kubelet[1786]: E1213 08:54:19.885673 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:20.816897 containerd[1477]: time="2024-12-13T08:54:20.816813147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:20.818085 containerd[1477]: time="2024-12-13T08:54:20.817883806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 08:54:20.819045 containerd[1477]: time="2024-12-13T08:54:20.818974416Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:20.821419 containerd[1477]: time="2024-12-13T08:54:20.821289546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:20.822673 containerd[1477]: time="2024-12-13T08:54:20.822253536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.57514176s" Dec 13 08:54:20.822673 containerd[1477]: time="2024-12-13T08:54:20.822305331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 08:54:20.829182 containerd[1477]: time="2024-12-13T08:54:20.828414547Z" level=info msg="CreateContainer within sandbox \"8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 08:54:20.852680 containerd[1477]: time="2024-12-13T08:54:20.852197386Z" level=info msg="CreateContainer within sandbox \"8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"faad78f367385d5f105cae1fb6a3c70d208c3a88982d2fb6e4cb7ebccdfa48b8\"" Dec 13 08:54:20.853181 containerd[1477]: time="2024-12-13T08:54:20.853088276Z" level=info msg="StartContainer for \"faad78f367385d5f105cae1fb6a3c70d208c3a88982d2fb6e4cb7ebccdfa48b8\"" Dec 13 08:54:20.887355 kubelet[1786]: E1213 08:54:20.887285 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:20.897471 systemd[1]: Started cri-containerd-faad78f367385d5f105cae1fb6a3c70d208c3a88982d2fb6e4cb7ebccdfa48b8.scope - libcontainer container faad78f367385d5f105cae1fb6a3c70d208c3a88982d2fb6e4cb7ebccdfa48b8. 
Dec 13 08:54:20.928885 containerd[1477]: time="2024-12-13T08:54:20.928713662Z" level=info msg="StartContainer for \"faad78f367385d5f105cae1fb6a3c70d208c3a88982d2fb6e4cb7ebccdfa48b8\" returns successfully" Dec 13 08:54:21.046347 kubelet[1786]: I1213 08:54:21.046296 1786 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 08:54:21.047750 kubelet[1786]: I1213 08:54:21.047716 1786 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 08:54:21.200108 kubelet[1786]: I1213 08:54:21.199808 1786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-fr59d" podStartSLOduration=20.137575154 podStartE2EDuration="24.199754685s" podCreationTimestamp="2024-12-13 08:53:57 +0000 UTC" firstStartedPulling="2024-12-13 08:54:13.6175897 +0000 UTC m=+29.362185464" lastFinishedPulling="2024-12-13 08:54:17.679769243 +0000 UTC m=+33.424364995" observedRunningTime="2024-12-13 08:54:18.180923731 +0000 UTC m=+33.925519493" watchObservedRunningTime="2024-12-13 08:54:21.199754685 +0000 UTC m=+36.944350447" Dec 13 08:54:21.887915 kubelet[1786]: E1213 08:54:21.887821 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:22.346192 kubelet[1786]: I1213 08:54:22.345496 1786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-wfm6q" podStartSLOduration=30.16935878 podStartE2EDuration="37.345448269s" podCreationTimestamp="2024-12-13 08:53:45 +0000 UTC" firstStartedPulling="2024-12-13 08:54:13.646688897 +0000 UTC m=+29.391284635" lastFinishedPulling="2024-12-13 08:54:20.822778379 +0000 UTC m=+36.567374124" observedRunningTime="2024-12-13 08:54:21.201836521 +0000 UTC m=+36.946432283" 
watchObservedRunningTime="2024-12-13 08:54:22.345448269 +0000 UTC m=+38.090044031" Dec 13 08:54:22.346192 kubelet[1786]: I1213 08:54:22.345684 1786 topology_manager.go:215] "Topology Admit Handler" podUID="714983a6-692b-4737-8e7c-89384fc1e960" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 08:54:22.353293 systemd[1]: Created slice kubepods-besteffort-pod714983a6_692b_4737_8e7c_89384fc1e960.slice - libcontainer container kubepods-besteffort-pod714983a6_692b_4737_8e7c_89384fc1e960.slice. Dec 13 08:54:22.527521 update_engine[1456]: I20241213 08:54:22.527369 1456 update_attempter.cc:509] Updating boot flags... Dec 13 08:54:22.546490 kubelet[1786]: I1213 08:54:22.546314 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcnk8\" (UniqueName: \"kubernetes.io/projected/714983a6-692b-4737-8e7c-89384fc1e960-kube-api-access-gcnk8\") pod \"nfs-server-provisioner-0\" (UID: \"714983a6-692b-4737-8e7c-89384fc1e960\") " pod="default/nfs-server-provisioner-0" Dec 13 08:54:22.546490 kubelet[1786]: I1213 08:54:22.546363 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/714983a6-692b-4737-8e7c-89384fc1e960-data\") pod \"nfs-server-provisioner-0\" (UID: \"714983a6-692b-4737-8e7c-89384fc1e960\") " pod="default/nfs-server-provisioner-0" Dec 13 08:54:22.570036 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3027) Dec 13 08:54:22.622492 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3031) Dec 13 08:54:22.698670 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3031) Dec 13 08:54:22.888785 kubelet[1786]: E1213 08:54:22.888675 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
08:54:22.957280 containerd[1477]: time="2024-12-13T08:54:22.957074992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:714983a6-692b-4737-8e7c-89384fc1e960,Namespace:default,Attempt:0,}" Dec 13 08:54:23.169727 systemd-networkd[1366]: cali60e51b789ff: Link UP Dec 13 08:54:23.171040 systemd-networkd[1366]: cali60e51b789ff: Gained carrier Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.020 [INFO][3036] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {137.184.89.200-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 714983a6-692b-4737-8e7c-89384fc1e960 1206 0 2024-12-13 08:54:22 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 137.184.89.200 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="137.184.89.200-k8s-nfs--server--provisioner--0-" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.020 [INFO][3036] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Namespace="default" 
Pod="nfs-server-provisioner-0" WorkloadEndpoint="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.069 [INFO][3047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" HandleID="k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Workload="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.093 [INFO][3047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" HandleID="k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Workload="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332360), Attrs:map[string]string{"namespace":"default", "node":"137.184.89.200", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 08:54:23.069632397 +0000 UTC"}, Hostname:"137.184.89.200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.093 [INFO][3047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.094 [INFO][3047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.094 [INFO][3047] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '137.184.89.200' Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.097 [INFO][3047] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.109 [INFO][3047] ipam/ipam.go 372: Looking up existing affinities for host host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.126 [INFO][3047] ipam/ipam.go 489: Trying affinity for 192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.130 [INFO][3047] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.136 [INFO][3047] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.136 [INFO][3047] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.144 [INFO][3047] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8 Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.152 [INFO][3047] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.162 [INFO][3047] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.195/26] block=192.168.124.192/26 
handle="k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.162 [INFO][3047] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.195/26] handle="k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" host="137.184.89.200" Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.162 [INFO][3047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:23.192458 containerd[1477]: 2024-12-13 08:54:23.162 [INFO][3047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.195/26] IPv6=[] ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" HandleID="k8s-pod-network.5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Workload="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" Dec 13 08:54:23.193684 containerd[1477]: 2024-12-13 08:54:23.164 [INFO][3036] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"714983a6-692b-4737-8e7c-89384fc1e960", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 54, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:23.193684 containerd[1477]: 2024-12-13 08:54:23.164 [INFO][3036] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.195/32] ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" Dec 13 08:54:23.193684 containerd[1477]: 2024-12-13 08:54:23.164 [INFO][3036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" Dec 13 08:54:23.193684 containerd[1477]: 2024-12-13 08:54:23.169 [INFO][3036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" Dec 13 08:54:23.194005 containerd[1477]: 2024-12-13 08:54:23.170 [INFO][3036] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"714983a6-692b-4737-8e7c-89384fc1e960", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 54, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"de:56:8d:e0:5a:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:23.194005 containerd[1477]: 2024-12-13 08:54:23.189 [INFO][3036] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="137.184.89.200-k8s-nfs--server--provisioner--0-eth0" Dec 13 08:54:23.225969 containerd[1477]: time="2024-12-13T08:54:23.225733032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:54:23.225969 containerd[1477]: time="2024-12-13T08:54:23.225804600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:54:23.225969 containerd[1477]: time="2024-12-13T08:54:23.225816283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:23.226380 containerd[1477]: time="2024-12-13T08:54:23.225917508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:23.258433 systemd[1]: Started cri-containerd-5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8.scope - libcontainer container 5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8. Dec 13 08:54:23.311090 containerd[1477]: time="2024-12-13T08:54:23.311048254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:714983a6-692b-4737-8e7c-89384fc1e960,Namespace:default,Attempt:0,} returns sandbox id \"5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8\"" Dec 13 08:54:23.313692 containerd[1477]: time="2024-12-13T08:54:23.313635224Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 08:54:23.369532 kubelet[1786]: I1213 08:54:23.369429 1786 topology_manager.go:215] "Topology Admit Handler" podUID="697a9cc4-461d-47b6-8a95-2c8ed88dda29" podNamespace="calico-system" podName="calico-typha-554cbf74f9-qhts9" Dec 13 08:54:23.375873 systemd[1]: Created slice kubepods-besteffort-pod697a9cc4_461d_47b6_8a95_2c8ed88dda29.slice - libcontainer container kubepods-besteffort-pod697a9cc4_461d_47b6_8a95_2c8ed88dda29.slice. 
Dec 13 08:54:23.539996 kubelet[1786]: E1213 08:54:23.539418 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:54:23.554385 kubelet[1786]: I1213 08:54:23.553716 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/697a9cc4-461d-47b6-8a95-2c8ed88dda29-typha-certs\") pod \"calico-typha-554cbf74f9-qhts9\" (UID: \"697a9cc4-461d-47b6-8a95-2c8ed88dda29\") " pod="calico-system/calico-typha-554cbf74f9-qhts9" Dec 13 08:54:23.554385 kubelet[1786]: I1213 08:54:23.553781 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/697a9cc4-461d-47b6-8a95-2c8ed88dda29-tigera-ca-bundle\") pod \"calico-typha-554cbf74f9-qhts9\" (UID: \"697a9cc4-461d-47b6-8a95-2c8ed88dda29\") " pod="calico-system/calico-typha-554cbf74f9-qhts9" Dec 13 08:54:23.554385 kubelet[1786]: I1213 08:54:23.553836 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cnl5\" (UniqueName: \"kubernetes.io/projected/697a9cc4-461d-47b6-8a95-2c8ed88dda29-kube-api-access-8cnl5\") pod \"calico-typha-554cbf74f9-qhts9\" (UID: \"697a9cc4-461d-47b6-8a95-2c8ed88dda29\") " pod="calico-system/calico-typha-554cbf74f9-qhts9" Dec 13 08:54:23.679367 kubelet[1786]: E1213 08:54:23.679182 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:54:23.680176 containerd[1477]: time="2024-12-13T08:54:23.680088520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-554cbf74f9-qhts9,Uid:697a9cc4-461d-47b6-8a95-2c8ed88dda29,Namespace:calico-system,Attempt:0,}" Dec 13 
08:54:23.715008 containerd[1477]: time="2024-12-13T08:54:23.714848314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:54:23.715251 containerd[1477]: time="2024-12-13T08:54:23.715028160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:54:23.715251 containerd[1477]: time="2024-12-13T08:54:23.715056167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:23.715424 containerd[1477]: time="2024-12-13T08:54:23.715251905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:23.743401 systemd[1]: Started cri-containerd-c60f067a0ac669cc1dcb1399e32962e5c5f6b082dad199ab0fa598c0d7d8ff21.scope - libcontainer container c60f067a0ac669cc1dcb1399e32962e5c5f6b082dad199ab0fa598c0d7d8ff21. 
Dec 13 08:54:23.798483 containerd[1477]: time="2024-12-13T08:54:23.797999616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-554cbf74f9-qhts9,Uid:697a9cc4-461d-47b6-8a95-2c8ed88dda29,Namespace:calico-system,Attempt:0,} returns sandbox id \"c60f067a0ac669cc1dcb1399e32962e5c5f6b082dad199ab0fa598c0d7d8ff21\"" Dec 13 08:54:23.799478 kubelet[1786]: E1213 08:54:23.799423 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:54:23.889744 kubelet[1786]: E1213 08:54:23.889685 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:24.578499 systemd-networkd[1366]: cali60e51b789ff: Gained IPv6LL Dec 13 08:54:24.860406 kubelet[1786]: E1213 08:54:24.859972 1786 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:24.890667 kubelet[1786]: E1213 08:54:24.890599 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:24.947498 kubelet[1786]: I1213 08:54:24.946680 1786 topology_manager.go:215] "Topology Admit Handler" podUID="21f2ccb5-d28e-441f-a4a0-61bf744e137a" podNamespace="calico-system" podName="calico-kube-controllers-6dd9bbcfc9-bvvgs" Dec 13 08:54:24.955709 systemd[1]: Created slice kubepods-besteffort-pod21f2ccb5_d28e_441f_a4a0_61bf744e137a.slice - libcontainer container kubepods-besteffort-pod21f2ccb5_d28e_441f_a4a0_61bf744e137a.slice. 
Dec 13 08:54:24.964869 kubelet[1786]: I1213 08:54:24.964610 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f2ccb5-d28e-441f-a4a0-61bf744e137a-tigera-ca-bundle\") pod \"calico-kube-controllers-6dd9bbcfc9-bvvgs\" (UID: \"21f2ccb5-d28e-441f-a4a0-61bf744e137a\") " pod="calico-system/calico-kube-controllers-6dd9bbcfc9-bvvgs" Dec 13 08:54:24.964869 kubelet[1786]: I1213 08:54:24.964715 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8hw\" (UniqueName: \"kubernetes.io/projected/21f2ccb5-d28e-441f-a4a0-61bf744e137a-kube-api-access-tm8hw\") pod \"calico-kube-controllers-6dd9bbcfc9-bvvgs\" (UID: \"21f2ccb5-d28e-441f-a4a0-61bf744e137a\") " pod="calico-system/calico-kube-controllers-6dd9bbcfc9-bvvgs" Dec 13 08:54:25.261825 containerd[1477]: time="2024-12-13T08:54:25.261234110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd9bbcfc9-bvvgs,Uid:21f2ccb5-d28e-441f-a4a0-61bf744e137a,Namespace:calico-system,Attempt:0,}" Dec 13 08:54:25.495792 systemd-networkd[1366]: cali875f7e02632: Link UP Dec 13 08:54:25.498059 systemd-networkd[1366]: cali875f7e02632: Gained carrier Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.356 [INFO][3184] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0 calico-kube-controllers-6dd9bbcfc9- calico-system 21f2ccb5-d28e-441f-a4a0-61bf744e137a 1316 0 2024-12-13 08:54:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dd9bbcfc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 137.184.89.200 calico-kube-controllers-6dd9bbcfc9-bvvgs eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali875f7e02632 [] []}} ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Namespace="calico-system" Pod="calico-kube-controllers-6dd9bbcfc9-bvvgs" WorkloadEndpoint="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.356 [INFO][3184] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Namespace="calico-system" Pod="calico-kube-controllers-6dd9bbcfc9-bvvgs" WorkloadEndpoint="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.407 [INFO][3196] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" HandleID="k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Workload="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.426 [INFO][3196] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" HandleID="k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Workload="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319420), Attrs:map[string]string{"namespace":"calico-system", "node":"137.184.89.200", "pod":"calico-kube-controllers-6dd9bbcfc9-bvvgs", "timestamp":"2024-12-13 08:54:25.407821795 +0000 UTC"}, Hostname:"137.184.89.200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 
08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.426 [INFO][3196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.426 [INFO][3196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.426 [INFO][3196] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '137.184.89.200' Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.430 [INFO][3196] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.438 [INFO][3196] ipam/ipam.go 372: Looking up existing affinities for host host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.448 [INFO][3196] ipam/ipam.go 489: Trying affinity for 192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.451 [INFO][3196] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.456 [INFO][3196] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.456 [INFO][3196] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.459 [INFO][3196] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591 Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.466 [INFO][3196] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.192/26 
handle="k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.479 [INFO][3196] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.196/26] block=192.168.124.192/26 handle="k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.479 [INFO][3196] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.196/26] handle="k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" host="137.184.89.200" Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.479 [INFO][3196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:25.518281 containerd[1477]: 2024-12-13 08:54:25.479 [INFO][3196] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.196/26] IPv6=[] ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" HandleID="k8s-pod-network.e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Workload="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" Dec 13 08:54:25.519109 containerd[1477]: 2024-12-13 08:54:25.487 [INFO][3184] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Namespace="calico-system" Pod="calico-kube-controllers-6dd9bbcfc9-bvvgs" WorkloadEndpoint="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0", GenerateName:"calico-kube-controllers-6dd9bbcfc9-", Namespace:"calico-system", SelfLink:"", UID:"21f2ccb5-d28e-441f-a4a0-61bf744e137a", ResourceVersion:"1316", Generation:0, CreationTimestamp:time.Date(2024, 
time.December, 13, 8, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd9bbcfc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"", Pod:"calico-kube-controllers-6dd9bbcfc9-bvvgs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali875f7e02632", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:25.519109 containerd[1477]: 2024-12-13 08:54:25.487 [INFO][3184] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.196/32] ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Namespace="calico-system" Pod="calico-kube-controllers-6dd9bbcfc9-bvvgs" WorkloadEndpoint="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" Dec 13 08:54:25.519109 containerd[1477]: 2024-12-13 08:54:25.487 [INFO][3184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali875f7e02632 ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Namespace="calico-system" Pod="calico-kube-controllers-6dd9bbcfc9-bvvgs" WorkloadEndpoint="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" Dec 13 08:54:25.519109 containerd[1477]: 2024-12-13 08:54:25.499 [INFO][3184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Namespace="calico-system" Pod="calico-kube-controllers-6dd9bbcfc9-bvvgs" WorkloadEndpoint="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" Dec 13 08:54:25.519109 containerd[1477]: 2024-12-13 08:54:25.499 [INFO][3184] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Namespace="calico-system" Pod="calico-kube-controllers-6dd9bbcfc9-bvvgs" WorkloadEndpoint="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0", GenerateName:"calico-kube-controllers-6dd9bbcfc9-", Namespace:"calico-system", SelfLink:"", UID:"21f2ccb5-d28e-441f-a4a0-61bf744e137a", ResourceVersion:"1316", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd9bbcfc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591", Pod:"calico-kube-controllers-6dd9bbcfc9-bvvgs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali875f7e02632", MAC:"a6:95:e3:06:af:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:25.519109 containerd[1477]: 2024-12-13 08:54:25.513 [INFO][3184] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591" Namespace="calico-system" Pod="calico-kube-controllers-6dd9bbcfc9-bvvgs" WorkloadEndpoint="137.184.89.200-k8s-calico--kube--controllers--6dd9bbcfc9--bvvgs-eth0" Dec 13 08:54:25.597557 containerd[1477]: time="2024-12-13T08:54:25.596928051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:54:25.597557 containerd[1477]: time="2024-12-13T08:54:25.597070808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:54:25.597557 containerd[1477]: time="2024-12-13T08:54:25.597104463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:25.600341 containerd[1477]: time="2024-12-13T08:54:25.598473944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:25.648940 systemd[1]: Started cri-containerd-e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591.scope - libcontainer container e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591. 
Dec 13 08:54:25.712271 containerd[1477]: time="2024-12-13T08:54:25.711805162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd9bbcfc9-bvvgs,Uid:21f2ccb5-d28e-441f-a4a0-61bf744e137a,Namespace:calico-system,Attempt:0,} returns sandbox id \"e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591\"" Dec 13 08:54:25.770065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126531408.mount: Deactivated successfully. Dec 13 08:54:25.891131 kubelet[1786]: E1213 08:54:25.890948 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:26.756274 systemd-networkd[1366]: cali875f7e02632: Gained IPv6LL Dec 13 08:54:26.891552 kubelet[1786]: E1213 08:54:26.891338 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:27.891867 kubelet[1786]: E1213 08:54:27.891767 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:28.130088 containerd[1477]: time="2024-12-13T08:54:28.128598630Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Dec 13 08:54:28.130088 containerd[1477]: time="2024-12-13T08:54:28.129879317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:28.133014 containerd[1477]: time="2024-12-13T08:54:28.132768719Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:28.136182 containerd[1477]: time="2024-12-13T08:54:28.135107944Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:28.156605 containerd[1477]: time="2024-12-13T08:54:28.156394273Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.842680049s" Dec 13 08:54:28.156605 containerd[1477]: time="2024-12-13T08:54:28.156476680Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 08:54:28.158195 containerd[1477]: time="2024-12-13T08:54:28.158001991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 08:54:28.159371 containerd[1477]: time="2024-12-13T08:54:28.159336356Z" level=info msg="CreateContainer within sandbox \"5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 08:54:28.185380 containerd[1477]: time="2024-12-13T08:54:28.184931552Z" level=info msg="CreateContainer within sandbox \"5b5b12772b285668e28115676d472cda3b26a2775f5b68c756cc19702127c9e8\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2b53a1e4ee85bccab54eb9f3328a760d355499339538500be8bad1f8645e208f\"" Dec 13 08:54:28.186290 containerd[1477]: time="2024-12-13T08:54:28.186245228Z" level=info msg="StartContainer for \"2b53a1e4ee85bccab54eb9f3328a760d355499339538500be8bad1f8645e208f\"" Dec 13 08:54:28.232589 systemd[1]: 
run-containerd-runc-k8s.io-2b53a1e4ee85bccab54eb9f3328a760d355499339538500be8bad1f8645e208f-runc.Tb3h87.mount: Deactivated successfully. Dec 13 08:54:28.242660 systemd[1]: Started cri-containerd-2b53a1e4ee85bccab54eb9f3328a760d355499339538500be8bad1f8645e208f.scope - libcontainer container 2b53a1e4ee85bccab54eb9f3328a760d355499339538500be8bad1f8645e208f. Dec 13 08:54:28.309737 containerd[1477]: time="2024-12-13T08:54:28.309486174Z" level=info msg="StartContainer for \"2b53a1e4ee85bccab54eb9f3328a760d355499339538500be8bad1f8645e208f\" returns successfully" Dec 13 08:54:28.892975 kubelet[1786]: E1213 08:54:28.892907 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:29.303991 kubelet[1786]: I1213 08:54:29.303836 1786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.459759005 podStartE2EDuration="7.303758794s" podCreationTimestamp="2024-12-13 08:54:22 +0000 UTC" firstStartedPulling="2024-12-13 08:54:23.312960361 +0000 UTC m=+39.057556113" lastFinishedPulling="2024-12-13 08:54:28.156960145 +0000 UTC m=+43.901555902" observedRunningTime="2024-12-13 08:54:29.303646498 +0000 UTC m=+45.048242264" watchObservedRunningTime="2024-12-13 08:54:29.303758794 +0000 UTC m=+45.048354568" Dec 13 08:54:29.893420 kubelet[1786]: E1213 08:54:29.893338 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:30.711726 containerd[1477]: time="2024-12-13T08:54:30.710709805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:30.714311 containerd[1477]: time="2024-12-13T08:54:30.714203249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 08:54:30.718193 containerd[1477]: 
time="2024-12-13T08:54:30.717637317Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:30.726800 containerd[1477]: time="2024-12-13T08:54:30.726704074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:30.728225 containerd[1477]: time="2024-12-13T08:54:30.728174171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.5701315s" Dec 13 08:54:30.728636 containerd[1477]: time="2024-12-13T08:54:30.728507297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 08:54:30.730711 containerd[1477]: time="2024-12-13T08:54:30.730561262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 08:54:30.768652 containerd[1477]: time="2024-12-13T08:54:30.768331743Z" level=info msg="CreateContainer within sandbox \"c60f067a0ac669cc1dcb1399e32962e5c5f6b082dad199ab0fa598c0d7d8ff21\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 08:54:30.800619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692731046.mount: Deactivated successfully. 
Dec 13 08:54:30.812213 containerd[1477]: time="2024-12-13T08:54:30.812104992Z" level=info msg="CreateContainer within sandbox \"c60f067a0ac669cc1dcb1399e32962e5c5f6b082dad199ab0fa598c0d7d8ff21\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e2453431631ebd3525eaba0e3d2322842b2d17a0c84df6c89f71c13f76d23888\"" Dec 13 08:54:30.814651 containerd[1477]: time="2024-12-13T08:54:30.814589531Z" level=info msg="StartContainer for \"e2453431631ebd3525eaba0e3d2322842b2d17a0c84df6c89f71c13f76d23888\"" Dec 13 08:54:30.888558 systemd[1]: Started cri-containerd-e2453431631ebd3525eaba0e3d2322842b2d17a0c84df6c89f71c13f76d23888.scope - libcontainer container e2453431631ebd3525eaba0e3d2322842b2d17a0c84df6c89f71c13f76d23888. Dec 13 08:54:30.894558 kubelet[1786]: E1213 08:54:30.894459 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:30.965889 containerd[1477]: time="2024-12-13T08:54:30.964861057Z" level=info msg="StartContainer for \"e2453431631ebd3525eaba0e3d2322842b2d17a0c84df6c89f71c13f76d23888\" returns successfully" Dec 13 08:54:31.293642 kubelet[1786]: E1213 08:54:31.292877 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:54:31.344832 kubelet[1786]: I1213 08:54:31.344699 1786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-554cbf74f9-qhts9" podStartSLOduration=1.4168499620000001 podStartE2EDuration="8.344639481s" podCreationTimestamp="2024-12-13 08:54:23 +0000 UTC" firstStartedPulling="2024-12-13 08:54:23.801430689 +0000 UTC m=+39.546026447" lastFinishedPulling="2024-12-13 08:54:30.72922021 +0000 UTC m=+46.473815966" observedRunningTime="2024-12-13 08:54:31.314664284 +0000 UTC m=+47.059260050" watchObservedRunningTime="2024-12-13 08:54:31.344639481 +0000 UTC 
m=+47.089235244" Dec 13 08:54:31.895122 kubelet[1786]: E1213 08:54:31.895064 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:32.301395 kubelet[1786]: E1213 08:54:32.301226 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:54:32.896472 kubelet[1786]: E1213 08:54:32.896426 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:33.298904 kubelet[1786]: E1213 08:54:33.298534 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:54:33.873129 containerd[1477]: time="2024-12-13T08:54:33.873012851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:33.876689 containerd[1477]: time="2024-12-13T08:54:33.875321533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 08:54:33.877942 containerd[1477]: time="2024-12-13T08:54:33.877828400Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:33.882302 containerd[1477]: time="2024-12-13T08:54:33.882238487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:33.884322 containerd[1477]: time="2024-12-13T08:54:33.883685577Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.153010324s" Dec 13 08:54:33.884322 containerd[1477]: time="2024-12-13T08:54:33.883740379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 08:54:33.903292 kubelet[1786]: E1213 08:54:33.903244 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:33.904402 containerd[1477]: time="2024-12-13T08:54:33.903237254Z" level=info msg="CreateContainer within sandbox \"e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 08:54:33.925598 containerd[1477]: time="2024-12-13T08:54:33.925499005Z" level=info msg="CreateContainer within sandbox \"e826373324d1ef8d309ed974556ce79511852996e67c9af3b778d32d6b8d9591\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"02ca0a558355690b430c7d5a9d6315d9db32cd3020075c2489965dee955e8548\"" Dec 13 08:54:33.927886 containerd[1477]: time="2024-12-13T08:54:33.926432102Z" level=info msg="StartContainer for \"02ca0a558355690b430c7d5a9d6315d9db32cd3020075c2489965dee955e8548\"" Dec 13 08:54:33.971659 systemd[1]: Started cri-containerd-02ca0a558355690b430c7d5a9d6315d9db32cd3020075c2489965dee955e8548.scope - libcontainer container 02ca0a558355690b430c7d5a9d6315d9db32cd3020075c2489965dee955e8548. 
Dec 13 08:54:34.044681 containerd[1477]: time="2024-12-13T08:54:34.044552950Z" level=info msg="StartContainer for \"02ca0a558355690b430c7d5a9d6315d9db32cd3020075c2489965dee955e8548\" returns successfully" Dec 13 08:54:34.334906 kubelet[1786]: I1213 08:54:34.334319 1786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dd9bbcfc9-bvvgs" podStartSLOduration=2.164835059 podStartE2EDuration="10.334272503s" podCreationTimestamp="2024-12-13 08:54:24 +0000 UTC" firstStartedPulling="2024-12-13 08:54:25.714613693 +0000 UTC m=+41.459209430" lastFinishedPulling="2024-12-13 08:54:33.884051121 +0000 UTC m=+49.628646874" observedRunningTime="2024-12-13 08:54:34.329979137 +0000 UTC m=+50.074574899" watchObservedRunningTime="2024-12-13 08:54:34.334272503 +0000 UTC m=+50.078868269" Dec 13 08:54:34.904108 kubelet[1786]: E1213 08:54:34.904038 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:35.904545 kubelet[1786]: E1213 08:54:35.904423 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:36.905077 kubelet[1786]: E1213 08:54:36.905003 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:37.905606 kubelet[1786]: E1213 08:54:37.905533 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:38.128989 kubelet[1786]: I1213 08:54:38.128915 1786 topology_manager.go:215] "Topology Admit Handler" podUID="f2d9b2fc-027d-4611-a77e-323e820abdbd" podNamespace="default" podName="test-pod-1" Dec 13 08:54:38.154800 systemd[1]: Created slice kubepods-besteffort-podf2d9b2fc_027d_4611_a77e_323e820abdbd.slice - libcontainer container kubepods-besteffort-podf2d9b2fc_027d_4611_a77e_323e820abdbd.slice. 
Dec 13 08:54:38.272352 kubelet[1786]: I1213 08:54:38.272101 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a06b883-94e6-404a-a0bf-48ce321ef941\" (UniqueName: \"kubernetes.io/nfs/f2d9b2fc-027d-4611-a77e-323e820abdbd-pvc-9a06b883-94e6-404a-a0bf-48ce321ef941\") pod \"test-pod-1\" (UID: \"f2d9b2fc-027d-4611-a77e-323e820abdbd\") " pod="default/test-pod-1" Dec 13 08:54:38.272352 kubelet[1786]: I1213 08:54:38.272205 1786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jfb\" (UniqueName: \"kubernetes.io/projected/f2d9b2fc-027d-4611-a77e-323e820abdbd-kube-api-access-b2jfb\") pod \"test-pod-1\" (UID: \"f2d9b2fc-027d-4611-a77e-323e820abdbd\") " pod="default/test-pod-1" Dec 13 08:54:38.419199 kernel: FS-Cache: Loaded Dec 13 08:54:38.497517 kernel: RPC: Registered named UNIX socket transport module. Dec 13 08:54:38.497670 kernel: RPC: Registered udp transport module. Dec 13 08:54:38.498410 kernel: RPC: Registered tcp transport module. Dec 13 08:54:38.498512 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 08:54:38.499247 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
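The RPC and NFS transport modules above are loaded lazily when the kubelet first mounts the NFS-backed PVC for test-pod-1. The `nfsidmap` warnings that follow report that the server principal's NFSv4 domain (`nfs-server-provisioner.default.svc.cluster.local`) does not match the client's local domain (`2.1-6-e72ca174b4`, derived from the hostname), so the id falls back to nobody. Where real id mapping matters, the domain is normally pinned in `/etc/idmapd.conf`; a minimal fragment, with the `Domain` value only as an example of matching the server side:

```ini
; /etc/idmapd.conf -- pin the NFSv4 id-mapping domain so client and server
; agree. The Domain value below is an example; it must match the server's.
[General]
Domain = nfs-server-provisioner.default.svc.cluster.local

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
```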
Dec 13 08:54:38.789348 kernel: NFS: Registering the id_resolver key type Dec 13 08:54:38.791418 kernel: Key type id_resolver registered Dec 13 08:54:38.791662 kernel: Key type id_legacy registered Dec 13 08:54:38.831084 nfsidmap[3709]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-6-e72ca174b4' Dec 13 08:54:38.837684 nfsidmap[3710]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-6-e72ca174b4' Dec 13 08:54:38.906661 kubelet[1786]: E1213 08:54:38.906585 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:39.059315 containerd[1477]: time="2024-12-13T08:54:39.059042862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f2d9b2fc-027d-4611-a77e-323e820abdbd,Namespace:default,Attempt:0,}" Dec 13 08:54:39.251331 systemd-networkd[1366]: cali5ec59c6bf6e: Link UP Dec 13 08:54:39.251492 systemd-networkd[1366]: cali5ec59c6bf6e: Gained carrier Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.150 [INFO][3712] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {137.184.89.200-k8s-test--pod--1-eth0 default f2d9b2fc-027d-4611-a77e-323e820abdbd 1420 0 2024-12-13 08:54:23 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 137.184.89.200 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="137.184.89.200-k8s-test--pod--1-" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.150 [INFO][3712] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="137.184.89.200-k8s-test--pod--1-eth0" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.185 [INFO][3722] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" HandleID="k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Workload="137.184.89.200-k8s-test--pod--1-eth0" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.199 [INFO][3722] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" HandleID="k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Workload="137.184.89.200-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292810), Attrs:map[string]string{"namespace":"default", "node":"137.184.89.200", "pod":"test-pod-1", "timestamp":"2024-12-13 08:54:39.185435596 +0000 UTC"}, Hostname:"137.184.89.200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.199 [INFO][3722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.199 [INFO][3722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.199 [INFO][3722] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '137.184.89.200' Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.202 [INFO][3722] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.210 [INFO][3722] ipam/ipam.go 372: Looking up existing affinities for host host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.217 [INFO][3722] ipam/ipam.go 489: Trying affinity for 192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.220 [INFO][3722] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.224 [INFO][3722] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.224 [INFO][3722] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.228 [INFO][3722] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.236 [INFO][3722] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.245 [INFO][3722] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.197/26] block=192.168.124.192/26 
handle="k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.245 [INFO][3722] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.197/26] handle="k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" host="137.184.89.200" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.245 [INFO][3722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.245 [INFO][3722] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.197/26] IPv6=[] ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" HandleID="k8s-pod-network.00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Workload="137.184.89.200-k8s-test--pod--1-eth0" Dec 13 08:54:39.269705 containerd[1477]: 2024-12-13 08:54:39.248 [INFO][3712] cni-plugin/k8s.go 386: Populated endpoint ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="137.184.89.200-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f2d9b2fc-027d-4611-a77e-323e820abdbd", ResourceVersion:"1420", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"137.184.89.200", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:39.270392 containerd[1477]: 2024-12-13 08:54:39.248 [INFO][3712] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.197/32] ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="137.184.89.200-k8s-test--pod--1-eth0" Dec 13 08:54:39.270392 containerd[1477]: 2024-12-13 08:54:39.248 [INFO][3712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="137.184.89.200-k8s-test--pod--1-eth0" Dec 13 08:54:39.270392 containerd[1477]: 2024-12-13 08:54:39.250 [INFO][3712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="137.184.89.200-k8s-test--pod--1-eth0" Dec 13 08:54:39.270392 containerd[1477]: 2024-12-13 08:54:39.252 [INFO][3712] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="137.184.89.200-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f2d9b2fc-027d-4611-a77e-323e820abdbd", ResourceVersion:"1420", 
Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"d6:94:06:4b:8b:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:39.270392 containerd[1477]: 2024-12-13 08:54:39.265 [INFO][3712] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="137.184.89.200-k8s-test--pod--1-eth0" Dec 13 08:54:39.300526 containerd[1477]: time="2024-12-13T08:54:39.300378438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:54:39.300526 containerd[1477]: time="2024-12-13T08:54:39.300442929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:54:39.300526 containerd[1477]: time="2024-12-13T08:54:39.300458603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:39.301302 containerd[1477]: time="2024-12-13T08:54:39.300550073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:54:39.324792 systemd[1]: Started cri-containerd-00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee.scope - libcontainer container 00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee. Dec 13 08:54:39.373478 containerd[1477]: time="2024-12-13T08:54:39.373431725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f2d9b2fc-027d-4611-a77e-323e820abdbd,Namespace:default,Attempt:0,} returns sandbox id \"00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee\"" Dec 13 08:54:39.379460 containerd[1477]: time="2024-12-13T08:54:39.379263535Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 08:54:39.808194 containerd[1477]: time="2024-12-13T08:54:39.808110949Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:54:39.808931 containerd[1477]: time="2024-12-13T08:54:39.808878495Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 08:54:39.812201 containerd[1477]: time="2024-12-13T08:54:39.812108569Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 432.792555ms" Dec 13 08:54:39.812201 containerd[1477]: time="2024-12-13T08:54:39.812199338Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 
08:54:39.814506 containerd[1477]: time="2024-12-13T08:54:39.814246315Z" level=info msg="CreateContainer within sandbox \"00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 08:54:39.835096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2148861797.mount: Deactivated successfully. Dec 13 08:54:39.838972 containerd[1477]: time="2024-12-13T08:54:39.838906930Z" level=info msg="CreateContainer within sandbox \"00a509500a64252e8df1514c0cd43741b8b0d861391fe6c2709bf8ed259a65ee\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"fe2f9bbada8ce6ad7849acf4763cc7a65debc88fd12321652702e4b1f2d706b2\"" Dec 13 08:54:39.841027 containerd[1477]: time="2024-12-13T08:54:39.839907487Z" level=info msg="StartContainer for \"fe2f9bbada8ce6ad7849acf4763cc7a65debc88fd12321652702e4b1f2d706b2\"" Dec 13 08:54:39.886478 systemd[1]: Started cri-containerd-fe2f9bbada8ce6ad7849acf4763cc7a65debc88fd12321652702e4b1f2d706b2.scope - libcontainer container fe2f9bbada8ce6ad7849acf4763cc7a65debc88fd12321652702e4b1f2d706b2. 
Dec 13 08:54:39.907035 kubelet[1786]: E1213 08:54:39.906981 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:39.920694 containerd[1477]: time="2024-12-13T08:54:39.920643690Z" level=info msg="StartContainer for \"fe2f9bbada8ce6ad7849acf4763cc7a65debc88fd12321652702e4b1f2d706b2\" returns successfully" Dec 13 08:54:40.514418 systemd-networkd[1366]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 08:54:40.907145 kubelet[1786]: E1213 08:54:40.907072 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:41.907813 kubelet[1786]: E1213 08:54:41.907749 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:42.908267 kubelet[1786]: E1213 08:54:42.908105 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:43.908806 kubelet[1786]: E1213 08:54:43.908713 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:44.860544 kubelet[1786]: E1213 08:54:44.860380 1786 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:44.907658 containerd[1477]: time="2024-12-13T08:54:44.907400478Z" level=info msg="StopPodSandbox for \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\"" Dec 13 08:54:44.909888 kubelet[1786]: E1213 08:54:44.909736 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.965 [WARNING][3859] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"28d52e1e-e1cd-4d9b-8b76-884c4325f94f", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b", Pod:"nginx-deployment-6d5f899847-fr59d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali627a9bd1ab5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.965 [INFO][3859] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.965 [INFO][3859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" iface="eth0" netns="" Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.965 [INFO][3859] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.965 [INFO][3859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.989 [INFO][3865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.989 [INFO][3865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.989 [INFO][3865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.997 [WARNING][3865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:44.997 [INFO][3865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:45.000 [INFO][3865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:45.005024 containerd[1477]: 2024-12-13 08:54:45.002 [INFO][3859] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:45.005024 containerd[1477]: time="2024-12-13T08:54:45.004765742Z" level=info msg="TearDown network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\" successfully" Dec 13 08:54:45.005024 containerd[1477]: time="2024-12-13T08:54:45.004852036Z" level=info msg="StopPodSandbox for \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\" returns successfully" Dec 13 08:54:45.061540 containerd[1477]: time="2024-12-13T08:54:45.061482715Z" level=info msg="RemovePodSandbox for \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\"" Dec 13 08:54:45.061540 containerd[1477]: time="2024-12-13T08:54:45.061531328Z" level=info msg="Forcibly stopping sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\"" Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.115 [WARNING][3885] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"28d52e1e-e1cd-4d9b-8b76-884c4325f94f", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"1c61394dee998b55d85cf89c2fb4c131d8c4cbbc0dae380b0af4ac646364d76b", Pod:"nginx-deployment-6d5f899847-fr59d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali627a9bd1ab5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.115 [INFO][3885] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.115 [INFO][3885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" iface="eth0" netns="" Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.115 [INFO][3885] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.115 [INFO][3885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.140 [INFO][3891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.141 [INFO][3891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.141 [INFO][3891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.153 [WARNING][3891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.153 [INFO][3891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" HandleID="k8s-pod-network.a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Workload="137.184.89.200-k8s-nginx--deployment--6d5f899847--fr59d-eth0" Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.160 [INFO][3891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:54:45.177668 containerd[1477]: 2024-12-13 08:54:45.162 [INFO][3885] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611" Dec 13 08:54:45.177668 containerd[1477]: time="2024-12-13T08:54:45.177401900Z" level=info msg="TearDown network for sandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\" successfully" Dec 13 08:54:45.194547 containerd[1477]: time="2024-12-13T08:54:45.194471855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
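The teardown sequence above is deliberately idempotent: the IPAM plugin logs "Asked to release address but it doesn't exist. Ignoring" instead of failing, and `RemovePodSandbox` still returns successfully even when the sandbox status lookup finds nothing. A minimal sketch of that pattern, with entirely hypothetical names rather than containerd's or Calico's actual API:

```python
# Idempotent sandbox teardown, sketched after the log sequence above.
# All names here are illustrative, not containerd's or Calico's real API.

class Ipam:
    def __init__(self):
        # One pretend allocation, keyed by a hypothetical handle ID.
        self.addresses = {"handle-1": "192.168.124.193"}

    def release(self, handle):
        # Mirror the plugin's behaviour: releasing an unknown handle is a
        # warning that gets ignored, not an error.
        if handle in self.addresses:
            del self.addresses[handle]
        else:
            print(f"warning: {handle} not allocated, ignoring")

def stop_pod_sandbox(ipam, handle):
    # Teardown always releases the address and reports success, so a repeated
    # stop (e.g. the later "Forcibly stopping sandbox" pass) is harmless.
    ipam.release(handle)
    return "success"

ipam = Ipam()
assert stop_pod_sandbox(ipam, "handle-1") == "success"  # first teardown
assert stop_pod_sandbox(ipam, "handle-1") == "success"  # repeat is a no-op
```

This is why the log can show both a `StopPodSandbox` and a later "Forcibly stopping sandbox" for the same ID without either call erroring out.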
Dec 13 08:54:45.194710 containerd[1477]: time="2024-12-13T08:54:45.194583056Z" level=info msg="RemovePodSandbox \"a8dcdb74807b6bddcf932d4393ec3431ed5e671a66a0330a45d8165c36624611\" returns successfully" Dec 13 08:54:45.198630 containerd[1477]: time="2024-12-13T08:54:45.198556772Z" level=info msg="StopPodSandbox for \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\"" Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.270 [WARNING][3909] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-csi--node--driver--wfm6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8", Pod:"csi-node-driver-wfm6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3749974770", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.270 [INFO][3909] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.270 [INFO][3909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" iface="eth0" netns=""
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.270 [INFO][3909] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.270 [INFO][3909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.294 [INFO][3915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0"
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.295 [INFO][3915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.295 [INFO][3915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.307 [WARNING][3915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0"
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.307 [INFO][3915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0"
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.312 [INFO][3915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 08:54:45.315384 containerd[1477]: 2024-12-13 08:54:45.313 [INFO][3909] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"
Dec 13 08:54:45.315384 containerd[1477]: time="2024-12-13T08:54:45.315242065Z" level=info msg="TearDown network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\" successfully"
Dec 13 08:54:45.315384 containerd[1477]: time="2024-12-13T08:54:45.315269282Z" level=info msg="StopPodSandbox for \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\" returns successfully"
Dec 13 08:54:45.316076 containerd[1477]: time="2024-12-13T08:54:45.316037510Z" level=info msg="RemovePodSandbox for \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\""
Dec 13 08:54:45.316118 containerd[1477]: time="2024-12-13T08:54:45.316079230Z" level=info msg="Forcibly stopping sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\""
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.370 [WARNING][3933] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"137.184.89.200-k8s-csi--node--driver--wfm6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba5aff5f-e7db-4e55-ac8c-e5253f3d7000", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"137.184.89.200", ContainerID:"8e3e1f4a87258268b1f2d13e9953e7fb10b285ad3e3010df2765dbfc84b787d8", Pod:"csi-node-driver-wfm6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3749974770", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.370 [INFO][3933] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.370 [INFO][3933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" iface="eth0" netns=""
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.370 [INFO][3933] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.370 [INFO][3933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.397 [INFO][3939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0"
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.397 [INFO][3939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.398 [INFO][3939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.407 [WARNING][3939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0"
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.408 [INFO][3939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" HandleID="k8s-pod-network.ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30" Workload="137.184.89.200-k8s-csi--node--driver--wfm6q-eth0"
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.412 [INFO][3939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 08:54:45.415478 containerd[1477]: 2024-12-13 08:54:45.413 [INFO][3933] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30"
Dec 13 08:54:45.415979 containerd[1477]: time="2024-12-13T08:54:45.415554536Z" level=info msg="TearDown network for sandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\" successfully"
Dec 13 08:54:45.419990 containerd[1477]: time="2024-12-13T08:54:45.419925315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 08:54:45.419990 containerd[1477]: time="2024-12-13T08:54:45.420000482Z" level=info msg="RemovePodSandbox \"ab4ba6cffaabde87a606e0ecfec01fad7b23113e345989233f5efbc695d02f30\" returns successfully"
Dec 13 08:54:45.910543 kubelet[1786]: E1213 08:54:45.910485 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 08:54:46.911370 kubelet[1786]: E1213 08:54:46.911320 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 08:54:47.911587 kubelet[1786]: E1213 08:54:47.911525 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"