Jan 30 13:56:30.069365 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:56:30.069393 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:56:30.069407 kernel: BIOS-provided physical RAM map:
Jan 30 13:56:30.069414 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:56:30.069420 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:56:30.069426 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:56:30.069433 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 13:56:30.069440 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 13:56:30.069446 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:56:30.069455 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:56:30.069461 kernel: NX (Execute Disable) protection: active
Jan 30 13:56:30.069468 kernel: APIC: Static calls initialized
Jan 30 13:56:30.069480 kernel: SMBIOS 2.8 present.
Jan 30 13:56:30.069487 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 13:56:30.069495 kernel: Hypervisor detected: KVM
Jan 30 13:56:30.069510 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:56:30.069525 kernel: kvm-clock: using sched offset of 4155624467 cycles
Jan 30 13:56:30.069537 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:56:30.069544 kernel: tsc: Detected 2000.000 MHz processor
Jan 30 13:56:30.069552 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:56:30.069559 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:56:30.069566 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 13:56:30.069573 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:56:30.069580 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:56:30.069590 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:56:30.069597 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 13:56:30.069608 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:56:30.069622 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:56:30.069632 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:56:30.069642 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 13:56:30.069653 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:56:30.069663 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:56:30.069674 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:56:30.069690 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:56:30.069699 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 13:56:30.069710 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 13:56:30.069721 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 13:56:30.069732 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 13:56:30.069739 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 13:56:30.069746 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 13:56:30.069760 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 13:56:30.069768 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:56:30.069775 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:56:30.069782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 13:56:30.069789 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 13:56:30.069803 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 13:56:30.069810 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 13:56:30.072952 kernel: Zone ranges:
Jan 30 13:56:30.072980 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:56:30.072994 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 13:56:30.073006 kernel: Normal empty
Jan 30 13:56:30.073019 kernel: Movable zone start for each node
Jan 30 13:56:30.073031 kernel: Early memory node ranges
Jan 30 13:56:30.073044 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:56:30.073058 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 13:56:30.073071 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 13:56:30.073094 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:56:30.073107 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:56:30.073129 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 13:56:30.073142 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:56:30.073156 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:56:30.073168 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:56:30.073181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:56:30.073194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:56:30.073208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:56:30.073225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:56:30.073238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:56:30.073250 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:56:30.073264 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:56:30.073293 kernel: TSC deadline timer available
Jan 30 13:56:30.073306 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:56:30.073318 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:56:30.073331 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 13:56:30.073355 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:56:30.073370 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:56:30.073388 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:56:30.073401 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:56:30.073417 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:56:30.073430 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:56:30.073443 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 13:56:30.073458 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:56:30.073471 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:56:30.073488 kernel: random: crng init done
Jan 30 13:56:30.073501 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:56:30.073513 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:56:30.073526 kernel: Fallback order for Node 0: 0
Jan 30 13:56:30.073538 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 13:56:30.073550 kernel: Policy zone: DMA32
Jan 30 13:56:30.073563 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:56:30.073576 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 30 13:56:30.073589 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:56:30.073605 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:56:30.073618 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:56:30.073630 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:56:30.073644 kernel: Dynamic Preempt: voluntary
Jan 30 13:56:30.073657 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:56:30.073677 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:56:30.073691 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:56:30.073704 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:56:30.073717 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:56:30.073734 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:56:30.073745 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:56:30.073757 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:56:30.073770 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:56:30.073783 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:56:30.073802 kernel: Console: colour VGA+ 80x25
Jan 30 13:56:30.073815 kernel: printk: console [tty0] enabled
Jan 30 13:56:30.073848 kernel: printk: console [ttyS0] enabled
Jan 30 13:56:30.073861 kernel: ACPI: Core revision 20230628
Jan 30 13:56:30.073875 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:56:30.073892 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:56:30.073904 kernel: x2apic enabled
Jan 30 13:56:30.073917 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:56:30.073929 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:56:30.073942 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 30 13:56:30.073955 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jan 30 13:56:30.073968 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 13:56:30.073981 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 13:56:30.074010 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:56:30.074024 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:56:30.074037 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:56:30.074055 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:56:30.074068 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 13:56:30.074082 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:56:30.074096 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:56:30.074109 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 13:56:30.074123 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:56:30.074147 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:56:30.074161 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:56:30.074175 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:56:30.074188 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:56:30.074202 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 13:56:30.074217 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:56:30.074230 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:56:30.074243 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:56:30.074262 kernel: landlock: Up and running.
Jan 30 13:56:30.074275 kernel: SELinux: Initializing.
Jan 30 13:56:30.074289 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:56:30.074302 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:56:30.074316 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 13:56:30.074329 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:56:30.074343 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:56:30.074357 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:56:30.074376 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 13:56:30.074389 kernel: signal: max sigframe size: 1776
Jan 30 13:56:30.074402 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:56:30.074417 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:56:30.074431 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:56:30.074445 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:56:30.074459 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:56:30.074472 kernel: .... node #0, CPUs: #1
Jan 30 13:56:30.074485 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:56:30.074504 kernel: smpboot: Max logical packages: 1
Jan 30 13:56:30.074522 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jan 30 13:56:30.074536 kernel: devtmpfs: initialized
Jan 30 13:56:30.074549 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:56:30.074563 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:56:30.074578 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:56:30.074591 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:56:30.074605 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:56:30.074619 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:56:30.074632 kernel: audit: type=2000 audit(1738245388.104:1): state=initialized audit_enabled=0 res=1
Jan 30 13:56:30.074652 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:56:30.074665 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:56:30.074679 kernel: cpuidle: using governor menu
Jan 30 13:56:30.074693 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:56:30.074706 kernel: dca service started, version 1.12.1
Jan 30 13:56:30.074719 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:56:30.074732 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:56:30.074745 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:56:30.074758 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:56:30.074815 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:56:30.077547 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:56:30.077568 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:56:30.077583 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:56:30.077599 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:56:30.077615 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:56:30.077630 kernel: ACPI: Interpreter enabled
Jan 30 13:56:30.077644 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:56:30.077658 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:56:30.077684 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:56:30.077698 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:56:30.077712 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 13:56:30.077726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:56:30.078342 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:56:30.078552 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:56:30.078712 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:56:30.078741 kernel: acpiphp: Slot [3] registered
Jan 30 13:56:30.078758 kernel: acpiphp: Slot [4] registered
Jan 30 13:56:30.078774 kernel: acpiphp: Slot [5] registered
Jan 30 13:56:30.078790 kernel: acpiphp: Slot [6] registered
Jan 30 13:56:30.078806 kernel: acpiphp: Slot [7] registered
Jan 30 13:56:30.078855 kernel: acpiphp: Slot [8] registered
Jan 30 13:56:30.078871 kernel: acpiphp: Slot [9] registered
Jan 30 13:56:30.078887 kernel: acpiphp: Slot [10] registered
Jan 30 13:56:30.078903 kernel: acpiphp: Slot [11] registered
Jan 30 13:56:30.078924 kernel: acpiphp: Slot [12] registered
Jan 30 13:56:30.078939 kernel: acpiphp: Slot [13] registered
Jan 30 13:56:30.078955 kernel: acpiphp: Slot [14] registered
Jan 30 13:56:30.078971 kernel: acpiphp: Slot [15] registered
Jan 30 13:56:30.078987 kernel: acpiphp: Slot [16] registered
Jan 30 13:56:30.079004 kernel: acpiphp: Slot [17] registered
Jan 30 13:56:30.079019 kernel: acpiphp: Slot [18] registered
Jan 30 13:56:30.079035 kernel: acpiphp: Slot [19] registered
Jan 30 13:56:30.079051 kernel: acpiphp: Slot [20] registered
Jan 30 13:56:30.079066 kernel: acpiphp: Slot [21] registered
Jan 30 13:56:30.079086 kernel: acpiphp: Slot [22] registered
Jan 30 13:56:30.079101 kernel: acpiphp: Slot [23] registered
Jan 30 13:56:30.079117 kernel: acpiphp: Slot [24] registered
Jan 30 13:56:30.079133 kernel: acpiphp: Slot [25] registered
Jan 30 13:56:30.079148 kernel: acpiphp: Slot [26] registered
Jan 30 13:56:30.079164 kernel: acpiphp: Slot [27] registered
Jan 30 13:56:30.079180 kernel: acpiphp: Slot [28] registered
Jan 30 13:56:30.079196 kernel: acpiphp: Slot [29] registered
Jan 30 13:56:30.079209 kernel: acpiphp: Slot [30] registered
Jan 30 13:56:30.079227 kernel: acpiphp: Slot [31] registered
Jan 30 13:56:30.079242 kernel: PCI host bridge to bus 0000:00
Jan 30 13:56:30.079451 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:56:30.079601 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:56:30.079742 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:56:30.081599 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 13:56:30.081755 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 13:56:30.081890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:56:30.082070 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:56:30.082220 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:56:30.082409 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 13:56:30.082555 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 13:56:30.082683 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 13:56:30.082861 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 13:56:30.083017 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 13:56:30.083142 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 13:56:30.083332 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 13:56:30.083461 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 13:56:30.083639 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 13:56:30.083793 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 13:56:30.085235 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 13:56:30.085422 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 13:56:30.085554 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 13:56:30.085712 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 13:56:30.085876 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 13:56:30.086000 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 13:56:30.086147 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:56:30.086310 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:56:30.086436 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 13:56:30.086574 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 13:56:30.086697 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 13:56:30.089971 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:56:30.090209 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 13:56:30.090382 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 13:56:30.090532 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 13:56:30.090721 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 13:56:30.090994 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 13:56:30.091134 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 13:56:30.091269 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 13:56:30.091434 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:56:30.091605 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 13:56:30.091749 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 13:56:30.093153 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 13:56:30.093410 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:56:30.093569 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 13:56:30.093720 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 13:56:30.097151 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 13:56:30.097437 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 13:56:30.097626 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 13:56:30.097780 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 13:56:30.097800 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:56:30.097815 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:56:30.097925 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:56:30.097938 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:56:30.097962 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:56:30.097976 kernel: iommu: Default domain type: Translated
Jan 30 13:56:30.097988 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:56:30.098001 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:56:30.098013 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:56:30.098027 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:56:30.098041 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 13:56:30.098259 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 13:56:30.098463 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 13:56:30.098634 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:56:30.098653 kernel: vgaarb: loaded
Jan 30 13:56:30.098667 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:56:30.098681 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:56:30.098694 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:56:30.098707 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:56:30.098720 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:56:30.098735 kernel: pnp: PnP ACPI init
Jan 30 13:56:30.098750 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 13:56:30.098769 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:56:30.098783 kernel: NET: Registered PF_INET protocol family
Jan 30 13:56:30.098797 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:56:30.098812 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:56:30.098848 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:56:30.098863 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:56:30.098877 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:56:30.098891 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:56:30.098905 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:56:30.098924 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:56:30.098938 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:56:30.098952 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:56:30.099111 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:56:30.099247 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:56:30.099382 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:56:30.099525 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 13:56:30.099665 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 13:56:30.100952 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 13:56:30.101162 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:56:30.101186 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 13:56:30.101345 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 45343 usecs
Jan 30 13:56:30.101366 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:56:30.101379 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:56:30.101392 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 30 13:56:30.101406 kernel: Initialise system trusted keyrings
Jan 30 13:56:30.101430 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:56:30.101443 kernel: Key type asymmetric registered
Jan 30 13:56:30.101455 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:56:30.101471 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:56:30.101485 kernel: io scheduler mq-deadline registered
Jan 30 13:56:30.101498 kernel: io scheduler kyber registered
Jan 30 13:56:30.101511 kernel: io scheduler bfq registered
Jan 30 13:56:30.101527 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:56:30.101542 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 13:56:30.101560 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 13:56:30.101577 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 13:56:30.101591 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:56:30.101603 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:56:30.101616 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:56:30.101629 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:56:30.101642 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:56:30.101901 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 13:56:30.101928 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:56:30.102077 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 13:56:30.102219 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:56:29 UTC (1738245389)
Jan 30 13:56:30.102354 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 13:56:30.102371 kernel: intel_pstate: CPU model not supported
Jan 30 13:56:30.102383 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:56:30.102395 kernel: Segment Routing with IPv6
Jan 30 13:56:30.102408 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:56:30.102422 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:56:30.102443 kernel: Key type dns_resolver registered
Jan 30 13:56:30.102456 kernel: IPI shorthand broadcast: enabled
Jan 30 13:56:30.102469 kernel: sched_clock: Marking stable (1245004306, 183155572)->(1481313752, -53153874)
Jan 30 13:56:30.102481 kernel: registered taskstats version 1
Jan 30 13:56:30.102494 kernel: Loading compiled-in X.509 certificates
Jan 30 13:56:30.102508 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:56:30.102519 kernel: Key type .fscrypt registered
Jan 30 13:56:30.102532 kernel: Key type fscrypt-provisioning registered
Jan 30 13:56:30.102544 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:56:30.102563 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:56:30.102575 kernel: ima: No architecture policies found
Jan 30 13:56:30.102588 kernel: clk: Disabling unused clocks
Jan 30 13:56:30.102601 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:56:30.102616 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:56:30.102656 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:56:30.102673 kernel: Run /init as init process
Jan 30 13:56:30.102687 kernel: with arguments:
Jan 30 13:56:30.102700 kernel: /init
Jan 30 13:56:30.102716 kernel: with environment:
Jan 30 13:56:30.102728 kernel: HOME=/
Jan 30 13:56:30.102742 kernel: TERM=linux
Jan 30 13:56:30.102754 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:56:30.102771 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:56:30.102791 systemd[1]: Detected virtualization kvm.
Jan 30 13:56:30.102805 systemd[1]: Detected architecture x86-64.
Jan 30 13:56:30.104939 systemd[1]: Running in initrd.
Jan 30 13:56:30.104967 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:56:30.104982 systemd[1]: Hostname set to .
Jan 30 13:56:30.104997 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:56:30.105012 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:56:30.105026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:56:30.105042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:56:30.105058 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:56:30.105082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:56:30.105097 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:56:30.105111 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:56:30.105128 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:56:30.105143 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:56:30.105156 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:56:30.105170 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:56:30.105187 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:56:30.105201 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:56:30.105215 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:56:30.105232 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:56:30.105246 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:56:30.105259 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:56:30.105276 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:56:30.105290 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:56:30.105304 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:56:30.105318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:56:30.105332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:56:30.105345 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:56:30.105359 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:56:30.105373 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:56:30.105389 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:56:30.105402 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:56:30.105416 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:56:30.105431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:56:30.105445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:56:30.105460 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:56:30.105535 systemd-journald[180]: Collecting audit messages is disabled.
Jan 30 13:56:30.105577 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:56:30.105590 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:56:30.105606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:56:30.105626 systemd-journald[180]: Journal started
Jan 30 13:56:30.105663 systemd-journald[180]: Runtime Journal (/run/log/journal/df0e53b2790d431c8239855ca0156884) is 4.9M, max 39.3M, 34.4M free.
Jan 30 13:56:30.068086 systemd-modules-load[182]: Inserted module 'overlay'
Jan 30 13:56:30.191187 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:56:30.191256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:56:30.191279 kernel: Bridge firewalling registered
Jan 30 13:56:30.134463 systemd-modules-load[182]: Inserted module 'br_netfilter'
Jan 30 13:56:30.194520 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:56:30.202610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:56:30.203917 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:56:30.227212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:56:30.230048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:56:30.237138 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:56:30.250760 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:56:30.276942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:56:30.300409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:56:30.316187 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:56:30.318986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:56:30.349794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:56:30.367056 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:56:30.384319 dracut-cmdline[214]: dracut-dracut-053
Jan 30 13:56:30.391779 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:56:30.432226 systemd-resolved[220]: Positive Trust Anchors:
Jan 30 13:56:30.432278 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:56:30.432334 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:56:30.441934 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 30 13:56:30.446878 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:56:30.449874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:56:30.604962 kernel: SCSI subsystem initialized
Jan 30 13:56:30.617866 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:56:30.643946 kernel: iscsi: registered transport (tcp)
Jan 30 13:56:30.681076 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:56:30.681198 kernel: QLogic iSCSI HBA Driver
Jan 30 13:56:30.806542 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:56:30.815218 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:56:30.880055 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:56:30.880206 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:56:30.880231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:56:30.947889 kernel: raid6: avx2x4 gen() 20900 MB/s
Jan 30 13:56:30.965880 kernel: raid6: avx2x2 gen() 24410 MB/s
Jan 30 13:56:30.983172 kernel: raid6: avx2x1 gen() 18522 MB/s
Jan 30 13:56:30.983291 kernel: raid6: using algorithm avx2x2 gen() 24410 MB/s
Jan 30 13:56:31.002113 kernel: raid6: .... xor() 13716 MB/s, rmw enabled
Jan 30 13:56:31.002211 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:56:31.031902 kernel: xor: automatically using best checksumming function avx
Jan 30 13:56:31.257882 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:56:31.276605 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:56:31.296416 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:56:31.313405 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 30 13:56:31.318678 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:56:31.329085 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:56:31.352178 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jan 30 13:56:31.403009 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:56:31.410209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:56:31.498250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:56:31.506153 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:56:31.542859 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:56:31.545715 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:56:31.548092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:56:31.549785 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:56:31.558077 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:56:31.588402 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:56:31.600864 kernel: scsi host0: Virtio SCSI HBA
Jan 30 13:56:31.610884 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 30 13:56:31.681236 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 13:56:31.681387 kernel: libata version 3.00 loaded.
Jan 30 13:56:31.681404 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:56:31.681422 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 13:56:31.681614 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:56:31.681634 kernel: scsi host1: ata_piix
Jan 30 13:56:31.682864 kernel: GPT:9289727 != 125829119
Jan 30 13:56:31.682898 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:56:31.682913 kernel: GPT:9289727 != 125829119
Jan 30 13:56:31.682928 kernel: scsi host2: ata_piix
Jan 30 13:56:31.683147 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:56:31.683166 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 30 13:56:31.683179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:56:31.683196 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 30 13:56:31.684880 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 30 13:56:31.697589 kernel: ACPI: bus type USB registered
Jan 30 13:56:31.697612 kernel: usbcore: registered new interface driver usbfs
Jan 30 13:56:31.697623 kernel: usbcore: registered new interface driver hub
Jan 30 13:56:31.697634 kernel: usbcore: registered new device driver usb
Jan 30 13:56:31.697644 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB)
Jan 30 13:56:31.701241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:56:31.701481 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:56:31.704320 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:56:31.704909 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:56:31.705108 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:56:31.705776 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:56:31.718390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:56:31.791119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:56:31.803221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:56:31.832336 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:56:31.881939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:56:31.898807 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 13:56:31.916329 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 13:56:31.916565 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 13:56:31.916745 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 30 13:56:31.917203 kernel: hub 1-0:1.0: USB hub found
Jan 30 13:56:31.917407 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 13:56:31.917564 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:56:31.917583 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:56:31.917601 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
Jan 30 13:56:31.917045 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:56:31.927891 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (455)
Jan 30 13:56:31.952276 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:56:31.957079 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:56:31.958188 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:56:31.967325 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:56:31.978166 disk-uuid[549]: Primary Header is updated.
Jan 30 13:56:31.978166 disk-uuid[549]: Secondary Entries is updated.
Jan 30 13:56:31.978166 disk-uuid[549]: Secondary Header is updated.
Jan 30 13:56:31.983887 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:56:31.989864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:56:31.997237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:56:31.999856 kernel: block device autoloading is deprecated and will be removed.
Jan 30 13:56:32.996853 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:56:32.998102 disk-uuid[550]: The operation has completed successfully.
Jan 30 13:56:33.093800 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:56:33.094896 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:56:33.110033 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:56:33.120953 sh[565]: Success
Jan 30 13:56:33.179379 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:56:33.311029 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:56:33.322012 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:56:33.328920 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:56:33.392942 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:56:33.393131 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:56:33.393158 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:56:33.393179 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:56:33.395549 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:56:33.421583 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:56:33.423145 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:56:33.434462 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:56:33.439087 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:56:33.460771 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:56:33.460891 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:56:33.460913 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:56:33.469918 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:56:33.496023 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:56:33.499769 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:56:33.519171 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:56:33.529511 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:56:33.781524 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:56:33.793359 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:56:33.816573 ignition[657]: Ignition 2.19.0
Jan 30 13:56:33.816591 ignition[657]: Stage: fetch-offline
Jan 30 13:56:33.816651 ignition[657]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:56:33.816668 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:56:33.816965 ignition[657]: parsed url from cmdline: ""
Jan 30 13:56:33.822976 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:56:33.816972 ignition[657]: no config URL provided
Jan 30 13:56:33.816981 ignition[657]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:56:33.816995 ignition[657]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:56:33.817004 ignition[657]: failed to fetch config: resource requires networking
Jan 30 13:56:33.817345 ignition[657]: Ignition finished successfully
Jan 30 13:56:33.845753 systemd-networkd[756]: lo: Link UP
Jan 30 13:56:33.845771 systemd-networkd[756]: lo: Gained carrier
Jan 30 13:56:33.850790 systemd-networkd[756]: Enumeration completed
Jan 30 13:56:33.851481 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 13:56:33.851488 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 30 13:56:33.852891 systemd-networkd[756]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:56:33.852900 systemd-networkd[756]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:56:33.853808 systemd-networkd[756]: eth0: Link UP
Jan 30 13:56:33.853814 systemd-networkd[756]: eth0: Gained carrier
Jan 30 13:56:33.853846 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 13:56:33.854050 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:56:33.854972 systemd[1]: Reached target network.target - Network.
Jan 30 13:56:33.859436 systemd-networkd[756]: eth1: Link UP
Jan 30 13:56:33.859443 systemd-networkd[756]: eth1: Gained carrier
Jan 30 13:56:33.859461 systemd-networkd[756]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:56:33.862198 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:56:33.888012 systemd-networkd[756]: eth0: DHCPv4 address 209.38.134.12/19, gateway 209.38.128.1 acquired from 169.254.169.253
Jan 30 13:56:33.894847 systemd-networkd[756]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253
Jan 30 13:56:33.908173 ignition[759]: Ignition 2.19.0
Jan 30 13:56:33.908194 ignition[759]: Stage: fetch
Jan 30 13:56:33.908488 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:56:33.908504 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:56:33.908660 ignition[759]: parsed url from cmdline: ""
Jan 30 13:56:33.908666 ignition[759]: no config URL provided
Jan 30 13:56:33.908674 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:56:33.908686 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:56:33.908713 ignition[759]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 30 13:56:33.949206 ignition[759]: GET result: OK
Jan 30 13:56:33.951179 ignition[759]: parsing config with SHA512: 3c83605896550c61eab2bff10d053e5c0173904550d11aeb38a35273db17c9b3eb8f2aaf614cfd2dfc7ca0edc7585cc692c3ca15bb7e166229d17badecd6eb3c
Jan 30 13:56:33.966938 unknown[759]: fetched base config from "system"
Jan 30 13:56:33.966957 unknown[759]: fetched base config from "system"
Jan 30 13:56:33.967375 ignition[759]: fetch: fetch complete
Jan 30 13:56:33.966968 unknown[759]: fetched user config from "digitalocean"
Jan 30 13:56:33.967384 ignition[759]: fetch: fetch passed
Jan 30 13:56:33.970070 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:56:33.967471 ignition[759]: Ignition finished successfully
Jan 30 13:56:33.982290 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:56:34.061349 ignition[767]: Ignition 2.19.0
Jan 30 13:56:34.061363 ignition[767]: Stage: kargs
Jan 30 13:56:34.061714 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:56:34.061728 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:56:34.062783 ignition[767]: kargs: kargs passed
Jan 30 13:56:34.065706 ignition[767]: Ignition finished successfully
Jan 30 13:56:34.069260 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:56:34.081326 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:56:34.107964 ignition[773]: Ignition 2.19.0
Jan 30 13:56:34.108890 ignition[773]: Stage: disks
Jan 30 13:56:34.109314 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:56:34.109331 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:56:34.113113 ignition[773]: disks: disks passed
Jan 30 13:56:34.115454 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:56:34.113483 ignition[773]: Ignition finished successfully
Jan 30 13:56:34.120964 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:56:34.122930 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:56:34.129659 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:56:34.130392 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:56:34.134842 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:56:34.145675 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:56:34.177964 systemd-fsck[782]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:56:34.189200 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:56:34.208081 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:56:34.454848 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:56:34.472780 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:56:34.475647 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:56:34.487534 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:56:34.493156 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:56:34.496182 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 30 13:56:34.510608 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:56:34.525254 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (790)
Jan 30 13:56:34.525299 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:56:34.525348 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:56:34.525365 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:56:34.514554 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:56:34.530877 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:56:34.514618 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:56:34.541525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:56:34.542553 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:56:34.562355 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:56:34.683889 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:56:34.710932 coreos-metadata[792]: Jan 30 13:56:34.708 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 13:56:34.712543 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:56:34.719251 coreos-metadata[793]: Jan 30 13:56:34.718 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 13:56:34.723475 coreos-metadata[792]: Jan 30 13:56:34.723 INFO Fetch successful
Jan 30 13:56:34.726023 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:56:34.737252 coreos-metadata[793]: Jan 30 13:56:34.736 INFO Fetch successful
Jan 30 13:56:34.738917 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:56:34.745652 coreos-metadata[793]: Jan 30 13:56:34.744 INFO wrote hostname ci-4081.3.0-a-7ea7bfb23e to /sysroot/etc/hostname
Jan 30 13:56:34.743885 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 30 13:56:34.744860 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 30 13:56:34.746993 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:56:34.958974 systemd-networkd[756]: eth0: Gained IPv6LL
Jan 30 13:56:34.966199 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:56:34.998507 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:56:35.009223 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:56:35.026247 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:56:35.028315 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:56:35.089842 systemd-networkd[756]: eth1: Gained IPv6LL
Jan 30 13:56:35.097639 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:56:35.119210 ignition[911]: INFO : Ignition 2.19.0
Jan 30 13:56:35.120742 ignition[911]: INFO : Stage: mount
Jan 30 13:56:35.120742 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:56:35.120742 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:56:35.123627 ignition[911]: INFO : mount: mount passed
Jan 30 13:56:35.123627 ignition[911]: INFO : Ignition finished successfully
Jan 30 13:56:35.123335 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:56:35.150301 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:56:35.476031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:56:35.502751 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922)
Jan 30 13:56:35.506854 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:56:35.506987 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:56:35.512739 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:56:35.520939 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:56:35.525782 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:56:35.583282 ignition[939]: INFO : Ignition 2.19.0
Jan 30 13:56:35.592347 ignition[939]: INFO : Stage: files
Jan 30 13:56:35.611456 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:56:35.611456 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:56:35.611456 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:56:35.628686 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:56:35.630665 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:56:35.657873 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:56:35.660367 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:56:35.666556 unknown[939]: wrote ssh authorized keys file for user: core
Jan 30 13:56:35.668116 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:56:35.677678 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:56:35.683019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:56:35.683019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:56:35.683019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:56:35.683019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:56:35.683019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:56:35.683019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:56:35.704995 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:56:36.100606 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 13:56:36.860086 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:56:36.867872 ignition[939]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:56:36.878401 ignition[939]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:56:36.878401 ignition[939]: INFO : files: files passed
Jan 30 13:56:36.878401 ignition[939]: INFO : Ignition finished successfully
Jan 30 13:56:36.883672 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:56:36.906118 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:56:36.925392 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:56:36.938570 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:56:36.940090 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:56:36.993697 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:56:36.993697 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:56:37.004552 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:56:37.007199 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:56:37.009335 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:56:37.048312 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:56:37.163252 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:56:37.164787 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:56:37.168492 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:56:37.170027 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:56:37.180513 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:56:37.193473 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:56:37.240082 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:56:37.263207 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:56:37.295804 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:56:37.297224 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:56:37.298461 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:56:37.299645 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:56:37.300015 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:56:37.301507 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:56:37.302619 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:56:37.310751 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:56:37.314037 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:56:37.315707 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:56:37.321476 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:56:37.324239 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:56:37.327667 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:56:37.328986 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:56:37.329933 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:56:37.333742 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:56:37.334037 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:56:37.345330 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:56:37.347428 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:56:37.349804 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:56:37.355733 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:56:37.360155 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:56:37.360341 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:56:37.362589 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:56:37.362913 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:56:37.368952 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:56:37.369259 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:56:37.373406 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:56:37.373762 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:56:37.386514 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:56:37.391276 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:56:37.396019 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:56:37.396393 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:56:37.398313 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:56:37.398555 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:56:37.417109 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:56:37.417254 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:56:37.449154 ignition[993]: INFO : Ignition 2.19.0
Jan 30 13:56:37.449154 ignition[993]: INFO : Stage: umount
Jan 30 13:56:37.457732 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:56:37.457732 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:56:37.457732 ignition[993]: INFO : umount: umount passed
Jan 30 13:56:37.457732 ignition[993]: INFO : Ignition finished successfully
Jan 30 13:56:37.457999 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:56:37.458182 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:56:37.467484 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:56:37.468438 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:56:37.473257 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:56:37.473392 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:56:37.474339 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:56:37.474445 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:56:37.475276 systemd[1]: Stopped target network.target - Network.
Jan 30 13:56:37.477017 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:56:37.477151 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:56:37.478992 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:56:37.487672 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:56:37.495119 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:56:37.496885 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:56:37.504310 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:56:37.515437 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:56:37.515549 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:56:37.519164 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:56:37.519262 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:56:37.525775 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:56:37.525965 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:56:37.527228 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:56:37.527330 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:56:37.529175 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:56:37.542503 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:56:37.544987 systemd-networkd[756]: eth0: DHCPv6 lease lost
Jan 30 13:56:37.546207 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:56:37.547698 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:56:37.548018 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:56:37.549967 systemd-networkd[756]: eth1: DHCPv6 lease lost
Jan 30 13:56:37.553914 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:56:37.554045 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:56:37.563615 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:56:37.564095 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:56:37.582711 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:56:37.583297 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:56:37.595255 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:56:37.595382 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:56:37.604256 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:56:37.605927 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:56:37.606094 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:56:37.609305 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:56:37.609444 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:56:37.617141 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:56:37.617300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:56:37.620408 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:56:37.620547 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:56:37.624125 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:56:37.657419 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:56:37.659396 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:56:37.667644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:56:37.668057 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:56:37.669466 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:56:37.669567 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:56:37.676726 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:56:37.676914 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:56:37.678163 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:56:37.678268 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:56:37.679250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:56:37.679339 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:56:37.698742 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:56:37.700237 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:56:37.700413 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:56:37.707930 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:56:37.708107 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:56:37.717766 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:56:37.717966 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:56:37.720021 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:56:37.720156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:56:37.722692 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:56:37.722958 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:56:37.739754 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:56:37.740007 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:56:37.744456 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:56:37.755497 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:56:37.798752 systemd[1]: Switching root.
Jan 30 13:56:37.867538 systemd-journald[180]: Journal stopped
Jan 30 13:56:39.885731 systemd-journald[180]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:56:39.885901 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:56:39.885934 kernel: SELinux: policy capability open_perms=1
Jan 30 13:56:39.885959 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:56:39.885976 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:56:39.885994 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:56:39.886013 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:56:39.886031 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:56:39.886051 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:56:39.886081 kernel: audit: type=1403 audit(1738245398.260:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:56:39.886102 systemd[1]: Successfully loaded SELinux policy in 66.637ms.
Jan 30 13:56:39.886138 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.711ms.
Jan 30 13:56:39.886160 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:56:39.886181 systemd[1]: Detected virtualization kvm.
Jan 30 13:56:39.886204 systemd[1]: Detected architecture x86-64.
Jan 30 13:56:39.886225 systemd[1]: Detected first boot.
Jan 30 13:56:39.886246 systemd[1]: Hostname set to .
Jan 30 13:56:39.886276 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:56:39.886297 zram_generator::config[1036]: No configuration found.
Jan 30 13:56:39.886320 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:56:39.886340 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:56:39.886358 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:56:39.886377 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:56:39.886400 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:56:39.886418 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:56:39.886449 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:56:39.886473 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:56:39.886493 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:56:39.886515 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:56:39.886536 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:56:39.886554 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:56:39.886575 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:56:39.886596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:56:39.886617 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:56:39.886646 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:56:39.886667 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:56:39.886689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:56:39.886709 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:56:39.886729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:56:39.886749 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:56:39.886767 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:56:39.886794 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:56:39.886815 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:56:39.886863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:56:39.886885 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:56:39.886906 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:56:39.886930 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:56:39.886950 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:56:39.886971 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:56:39.887005 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:56:39.887027 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:56:39.887048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:56:39.887069 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:56:39.887095 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:56:39.887114 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:56:39.887133 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:56:39.887152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:39.887171 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:56:39.887200 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:56:39.887219 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:56:39.887240 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:56:39.887258 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:56:39.887277 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:56:39.887296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:56:39.887313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:56:39.887330 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:56:39.887350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:56:39.887378 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:56:39.887396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:56:39.887416 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:56:39.887436 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:56:39.887457 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:56:39.887476 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:56:39.887494 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:56:39.887514 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:56:39.887545 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:56:39.887567 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:56:39.887587 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:56:39.887608 kernel: loop: module loaded
Jan 30 13:56:39.887627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:56:39.887648 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:56:39.887667 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:56:39.887687 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:56:39.890698 systemd[1]: Stopped verity-setup.service.
Jan 30 13:56:39.890756 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:39.890779 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:56:39.890802 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:56:39.893626 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:56:39.893694 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:56:39.893744 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:56:39.893764 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:56:39.893785 kernel: ACPI: bus type drm_connector registered
Jan 30 13:56:39.893806 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:56:39.893855 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:56:39.893884 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:56:39.893917 kernel: fuse: init (API version 7.39)
Jan 30 13:56:39.893935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:56:39.893956 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:56:39.893975 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:56:39.893993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:56:39.894013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:56:39.894034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:56:39.894057 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:56:39.894090 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:56:39.894111 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:56:39.894130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:56:39.894213 systemd-journald[1105]: Collecting audit messages is disabled.
Jan 30 13:56:39.894259 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:56:39.894281 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:56:39.894304 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:56:39.894341 systemd-journald[1105]: Journal started
Jan 30 13:56:39.894380 systemd-journald[1105]: Runtime Journal (/run/log/journal/df0e53b2790d431c8239855ca0156884) is 4.9M, max 39.3M, 34.4M free.
Jan 30 13:56:39.395883 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:56:39.899425 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:56:39.423663 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:56:39.424542 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:56:39.903436 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:56:39.918168 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:56:39.925982 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:56:39.928049 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:56:39.928110 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:56:39.933811 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:56:39.950937 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:56:39.959339 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:56:39.961507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:56:39.967197 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:56:39.970301 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:56:39.971086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:56:39.977184 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:56:39.979199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:56:39.992555 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:56:40.011195 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:56:40.042218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:56:40.058816 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:56:40.061264 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:56:40.063601 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:56:40.166952 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:56:40.169512 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:56:40.186258 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:56:40.205108 kernel: loop0: detected capacity change from 0 to 140768
Jan 30 13:56:40.226405 systemd-journald[1105]: Time spent on flushing to /var/log/journal/df0e53b2790d431c8239855ca0156884 is 36.255ms for 977 entries.
Jan 30 13:56:40.226405 systemd-journald[1105]: System Journal (/var/log/journal/df0e53b2790d431c8239855ca0156884) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:56:40.289163 systemd-journald[1105]: Received client request to flush runtime journal.
Jan 30 13:56:40.253451 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:56:40.258000 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:56:40.271599 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:56:40.281903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:56:40.295609 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:56:40.302862 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:56:40.347977 kernel: loop1: detected capacity change from 0 to 210664
Jan 30 13:56:40.367710 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:56:40.381296 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:56:40.395400 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
Jan 30 13:56:40.395426 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
Jan 30 13:56:40.426449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:56:40.439185 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:56:40.450951 kernel: loop2: detected capacity change from 0 to 142488
Jan 30 13:56:40.469176 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 13:56:40.537500 kernel: loop3: detected capacity change from 0 to 8
Jan 30 13:56:40.551475 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:56:40.562221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:56:40.571880 kernel: loop4: detected capacity change from 0 to 140768
Jan 30 13:56:40.607969 kernel: loop5: detected capacity change from 0 to 210664
Jan 30 13:56:40.632868 kernel: loop6: detected capacity change from 0 to 142488
Jan 30 13:56:40.676470 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jan 30 13:56:40.678064 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jan 30 13:56:40.689850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:56:40.701870 kernel: loop7: detected capacity change from 0 to 8
Jan 30 13:56:40.713925 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 30 13:56:40.716745 (sd-merge)[1182]: Merged extensions into '/usr'.
Jan 30 13:56:40.739232 systemd[1]: Reloading requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:56:40.739259 systemd[1]: Reloading...
Jan 30 13:56:40.907872 zram_generator::config[1210]: No configuration found.
Jan 30 13:56:41.182206 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:56:41.307938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:56:41.394047 systemd[1]: Reloading finished in 654 ms.
Jan 30 13:56:41.435375 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:56:41.438288 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:56:41.451305 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:56:41.468118 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:56:41.495231 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:56:41.495261 systemd[1]: Reloading...
Jan 30 13:56:41.514241 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:56:41.514867 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:56:41.516489 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:56:41.517223 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Jan 30 13:56:41.517326 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Jan 30 13:56:41.523063 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:56:41.523083 systemd-tmpfiles[1254]: Skipping /boot
Jan 30 13:56:41.543166 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:56:41.543188 systemd-tmpfiles[1254]: Skipping /boot
Jan 30 13:56:41.648899 zram_generator::config[1281]: No configuration found.
Jan 30 13:56:41.858387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:56:41.933698 systemd[1]: Reloading finished in 437 ms.
Jan 30 13:56:41.961200 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:56:41.972107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:56:41.990267 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:56:41.998722 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:56:42.003185 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:56:42.016364 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:56:42.024253 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:56:42.039154 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:56:42.050542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:42.053051 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:56:42.069424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:56:42.081441 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:56:42.097427 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:56:42.099135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:56:42.104992 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:56:42.106950 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:42.115180 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:42.115530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:56:42.115886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:56:42.117005 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:42.151673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:56:42.152962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:56:42.157732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:42.158360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:56:42.169320 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:56:42.171151 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:56:42.171456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:42.180416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:56:42.180702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:56:42.183386 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:56:42.191431 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:56:42.203883 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:56:42.207136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:56:42.210378 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:56:42.210715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:56:42.222704 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:56:42.233610 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:56:42.233730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:56:42.246481 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:56:42.247399 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:56:42.255351 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:56:42.266258 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:56:42.291156 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Jan 30 13:56:42.293665 augenrules[1362]: No rules
Jan 30 13:56:42.296435 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:56:42.297931 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:56:42.321645 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:56:42.366816 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:56:42.377221 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:56:42.482079 systemd-resolved[1330]: Positive Trust Anchors:
Jan 30 13:56:42.482106 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:56:42.482143 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:56:42.500063 systemd-resolved[1330]: Using system hostname 'ci-4081.3.0-a-7ea7bfb23e'.
Jan 30 13:56:42.504971 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:56:42.507082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:56:42.508705 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:56:42.510164 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:56:42.549957 systemd-networkd[1377]: lo: Link UP
Jan 30 13:56:42.549978 systemd-networkd[1377]: lo: Gained carrier
Jan 30 13:56:42.552285 systemd-networkd[1377]: Enumeration completed
Jan 30 13:56:42.552985 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:56:42.555044 systemd[1]: Reached target network.target - Network.
Jan 30 13:56:42.564027 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:56:42.602535 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 13:56:42.642710 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 30 13:56:42.644010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:42.644182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:56:42.652091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:56:42.664439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:56:42.668013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:56:42.668765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:56:42.669942 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:56:42.670009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:56:42.685065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:56:42.685521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:56:42.705238 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1392)
Jan 30 13:56:42.705367 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 30 13:56:42.704696 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:56:42.704989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:56:42.712955 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 30 13:56:42.717393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:56:42.717986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:56:42.721815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:56:42.723956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:56:42.857788 systemd-networkd[1377]: eth1: Configuring with /run/systemd/network/10-46:7a:25:e3:e4:0b.network.
Jan 30 13:56:42.859436 systemd-networkd[1377]: eth1: Link UP
Jan 30 13:56:42.859604 systemd-networkd[1377]: eth1: Gained carrier
Jan 30 13:56:42.864207 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
Jan 30 13:56:42.874252 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 30 13:56:42.874370 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 13:56:42.882156 systemd-networkd[1377]: eth0: Configuring with /run/systemd/network/10-56:86:71:4a:6c:6c.network.
Jan 30 13:56:42.884237 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
Jan 30 13:56:42.884279 systemd-networkd[1377]: eth0: Link UP
Jan 30 13:56:42.884286 systemd-networkd[1377]: eth0: Gained carrier
Jan 30 13:56:42.888895 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:56:42.890627 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
Jan 30 13:56:42.891083 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
Jan 30 13:56:42.957158 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 30 13:56:42.998351 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 13:56:42.998467 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:56:42.997428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:56:42.998888 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 13:56:43.011888 kernel: Console: switching to colour dummy device 80x25
Jan 30 13:56:43.012032 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 13:56:43.012060 kernel: [drm] features: -context_init
Jan 30 13:56:43.013938 kernel: [drm] number of scanouts: 1
Jan 30 13:56:43.014003 kernel: [drm] number of cap sets: 0
Jan 30 13:56:43.019866 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 30 13:56:43.022457 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:56:47.572908 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 13:56:47.577907 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 4725805881 wd_nsec: 4725805454
Jan 30 13:56:47.573475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:56:47.587434 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:56:47.597903 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 13:56:47.611581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:56:47.612852 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:56:47.628376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:56:47.647525 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:56:47.726260 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:56:47.726549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:56:47.749459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:56:47.835868 kernel: EDAC MC: Ver: 3.0.0
Jan 30 13:56:47.843941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:56:47.871411 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:56:47.885161 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:56:47.909154 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:56:47.951275 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:56:47.955142 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:56:47.955324 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:56:47.955638 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:56:47.955812 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:56:47.956259 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:56:47.956637 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:56:47.956758 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:56:47.957516 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:56:47.957628 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:56:47.957755 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:56:47.960109 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:56:47.964040 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:56:47.981366 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:56:47.985778 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:56:47.988446 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:56:47.991655 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:56:47.993709 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:56:47.994480 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:56:47.994516 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:56:47.999067 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:56:48.006112 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:56:48.011277 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 13:56:48.031239 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:56:48.044119 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:56:48.054551 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:56:48.058465 coreos-metadata[1441]: Jan 30 13:56:48.056 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 13:56:48.058693 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:56:48.062844 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:56:48.075118 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:56:48.096402 coreos-metadata[1441]: Jan 30 13:56:48.091 INFO Fetch successful
Jan 30 13:56:48.092358 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:56:48.096723 jq[1445]: false
Jan 30 13:56:48.106206 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:56:48.109804 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:56:48.111492 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:56:48.114667 dbus-daemon[1442]: [system] SELinux support is enabled
Jan 30 13:56:48.119207 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:56:48.136024 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:56:48.137774 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:56:48.148917 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:56:48.167838 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:56:48.169415 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:56:48.177658 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:56:48.177923 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:56:48.187990 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:56:48.188061 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:56:48.193563 update_engine[1451]: I20250130 13:56:48.193138 1451 main.cc:92] Flatcar Update Engine starting
Jan 30 13:56:48.196024 update_engine[1451]: I20250130 13:56:48.195948 1451 update_check_scheduler.cc:74] Next update check in 6m45s
Jan 30 13:56:48.196048 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:56:48.196194 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 30 13:56:48.196231 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:56:48.211397 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:56:48.228242 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:56:48.235611 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:56:48.235903 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:56:48.243638 jq[1454]: true
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found loop4
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found loop5
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found loop6
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found loop7
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found vda
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found vda1
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found vda2
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found vda3
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found usr
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found vda4
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found vda6
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found vda7
Jan 30 13:56:48.247310 extend-filesystems[1446]: Found vda9
Jan 30 13:56:48.247310 extend-filesystems[1446]: Checking size of /dev/vda9
Jan 30 13:56:48.320414 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:56:48.342104 jq[1472]: true
Jan 30 13:56:48.344256 extend-filesystems[1446]: Resized partition /dev/vda9
Jan 30 13:56:48.356950 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 13:56:48.361197 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:56:48.366573 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 13:56:48.375616 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1383)
Jan 30 13:56:48.384872 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 30 13:56:48.489560 systemd-logind[1450]: New seat seat0.
Jan 30 13:56:48.507684 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:56:48.520762 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 13:56:48.520792 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 13:56:48.525755 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:56:48.558934 bash[1500]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:56:48.563861 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:56:48.587107 systemd[1]: Starting sshkeys.service...
Jan 30 13:56:48.591460 systemd-networkd[1377]: eth1: Gained IPv6LL
Jan 30 13:56:48.592205 systemd-networkd[1377]: eth0: Gained IPv6LL
Jan 30 13:56:48.593983 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
Jan 30 13:56:48.600965 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 13:56:48.603703 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 13:56:48.619197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:56:48.631251 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 13:56:48.647227 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 13:56:48.660468 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 13:56:48.694775 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 30 13:56:48.746580 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 13:56:48.746580 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 30 13:56:48.746580 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 30 13:56:48.761659 extend-filesystems[1446]: Resized filesystem in /dev/vda9
Jan 30 13:56:48.761659 extend-filesystems[1446]: Found vdb
Jan 30 13:56:48.748563 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:56:48.750134 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:56:48.775989 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 13:56:48.792645 coreos-metadata[1511]: Jan 30 13:56:48.791 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 13:56:48.808027 coreos-metadata[1511]: Jan 30 13:56:48.807 INFO Fetch successful
Jan 30 13:56:48.847868 unknown[1511]: wrote ssh authorized keys file for user: core
Jan 30 13:56:48.877865 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:56:48.930728 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:56:48.933984 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 13:56:48.940335 systemd[1]: Finished sshkeys.service.
Jan 30 13:56:48.989101 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:56:48.996351 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:56:49.005451 containerd[1474]: time="2025-01-30T13:56:49.002599237Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 13:56:49.038608 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:56:49.038997 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:56:49.052555 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:56:49.063663 containerd[1474]: time="2025-01-30T13:56:49.063559516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:56:49.066404 containerd[1474]: time="2025-01-30T13:56:49.066334559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:56:49.067189 containerd[1474]: time="2025-01-30T13:56:49.066568965Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:56:49.067189 containerd[1474]: time="2025-01-30T13:56:49.066689783Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:56:49.067189 containerd[1474]: time="2025-01-30T13:56:49.066970951Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:56:49.067189 containerd[1474]: time="2025-01-30T13:56:49.066998196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:56:49.067189 containerd[1474]: time="2025-01-30T13:56:49.067077547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:56:49.067189 containerd[1474]: time="2025-01-30T13:56:49.067096150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:56:49.067768 containerd[1474]: time="2025-01-30T13:56:49.067733143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:56:49.068490 containerd[1474]: time="2025-01-30T13:56:49.067874938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:56:49.068490 containerd[1474]: time="2025-01-30T13:56:49.067905840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:56:49.068490 containerd[1474]: time="2025-01-30T13:56:49.067922864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:56:49.068490 containerd[1474]: time="2025-01-30T13:56:49.068064060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:56:49.068490 containerd[1474]: time="2025-01-30T13:56:49.068384676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:56:49.072419 containerd[1474]: time="2025-01-30T13:56:49.070032535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:56:49.072419 containerd[1474]: time="2025-01-30T13:56:49.070063167Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:56:49.072419 containerd[1474]: time="2025-01-30T13:56:49.070256761Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:56:49.072419 containerd[1474]: time="2025-01-30T13:56:49.070331129Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:56:49.093685 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:56:49.096078 containerd[1474]: time="2025-01-30T13:56:49.095731434Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:56:49.096592 containerd[1474]: time="2025-01-30T13:56:49.096290801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:56:49.096592 containerd[1474]: time="2025-01-30T13:56:49.096522606Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:56:49.096592 containerd[1474]: time="2025-01-30T13:56:49.096560607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:56:49.097708 containerd[1474]: time="2025-01-30T13:56:49.096777154Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:56:49.097708 containerd[1474]: time="2025-01-30T13:56:49.097619206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:56:49.098459 containerd[1474]: time="2025-01-30T13:56:49.098408275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:56:49.098856 containerd[1474]: time="2025-01-30T13:56:49.098772540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:56:49.098856 containerd[1474]: time="2025-01-30T13:56:49.098809399Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:56:49.099115 containerd[1474]: time="2025-01-30T13:56:49.098967295Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:56:49.099115 containerd[1474]: time="2025-01-30T13:56:49.098990448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:56:49.099115 containerd[1474]: time="2025-01-30T13:56:49.099005450Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:56:49.099115 containerd[1474]: time="2025-01-30T13:56:49.099038154Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:56:49.099115 containerd[1474]: time="2025-01-30T13:56:49.099064892Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:56:49.099115 containerd[1474]: time="2025-01-30T13:56:49.099082516Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:56:49.099466 containerd[1474]: time="2025-01-30T13:56:49.099303172Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:56:49.099466 containerd[1474]: time="2025-01-30T13:56:49.099335481Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:56:49.099466 containerd[1474]: time="2025-01-30T13:56:49.099393090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:56:49.099466 containerd[1474]: time="2025-01-30T13:56:49.099427892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.099466 containerd[1474]: time="2025-01-30T13:56:49.099445384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.099745 containerd[1474]: time="2025-01-30T13:56:49.099597009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.099745 containerd[1474]: time="2025-01-30T13:56:49.099617680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.099745 containerd[1474]: time="2025-01-30T13:56:49.099631701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.099745 containerd[1474]: time="2025-01-30T13:56:49.099644791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.099745 containerd[1474]: time="2025-01-30T13:56:49.099671548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.099745 containerd[1474]: time="2025-01-30T13:56:49.099685055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.099745 containerd[1474]: time="2025-01-30T13:56:49.099697810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.100147 containerd[1474]: time="2025-01-30T13:56:49.099722905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.100147 containerd[1474]: time="2025-01-30T13:56:49.099917224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.100147 containerd[1474]: time="2025-01-30T13:56:49.099934153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.100147 containerd[1474]: time="2025-01-30T13:56:49.099948739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100280218Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100321831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100354006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100366405Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100459683Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100491026Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100604394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100618836Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100628525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.100669 containerd[1474]: time="2025-01-30T13:56:49.100640743Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:56:49.101257 containerd[1474]: time="2025-01-30T13:56:49.100651388Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:56:49.101257 containerd[1474]: time="2025-01-30T13:56:49.100878344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:56:49.102213 containerd[1474]: time="2025-01-30T13:56:49.101656050Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[]
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:56:49.102213 containerd[1474]: time="2025-01-30T13:56:49.101739377Z" level=info msg="Connect containerd service" Jan 30 13:56:49.102213 containerd[1474]: time="2025-01-30T13:56:49.101808219Z" level=info msg="using legacy CRI server" Jan 30 13:56:49.102213 containerd[1474]: time="2025-01-30T13:56:49.101818768Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:56:49.102213 containerd[1474]: time="2025-01-30T13:56:49.102126568Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:56:49.107435 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:56:49.113766 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 30 13:56:49.121666 containerd[1474]: time="2025-01-30T13:56:49.116404230Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:56:49.121666 containerd[1474]: time="2025-01-30T13:56:49.120874627Z" level=info msg="Start subscribing containerd event" Jan 30 13:56:49.121666 containerd[1474]: time="2025-01-30T13:56:49.121001366Z" level=info msg="Start recovering state" Jan 30 13:56:49.121666 containerd[1474]: time="2025-01-30T13:56:49.121142119Z" level=info msg="Start event monitor" Jan 30 13:56:49.121666 containerd[1474]: time="2025-01-30T13:56:49.121180031Z" level=info msg="Start snapshots syncer" Jan 30 13:56:49.121666 containerd[1474]: time="2025-01-30T13:56:49.121197761Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:56:49.121666 containerd[1474]: time="2025-01-30T13:56:49.121210394Z" level=info msg="Start streaming server" Jan 30 13:56:49.116470 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:56:49.126948 containerd[1474]: time="2025-01-30T13:56:49.126380888Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:56:49.126948 containerd[1474]: time="2025-01-30T13:56:49.126491787Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:56:49.126817 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:56:49.136024 containerd[1474]: time="2025-01-30T13:56:49.134479733Z" level=info msg="containerd successfully booted in 0.146922s" Jan 30 13:56:50.190135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:56:50.193365 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:56:50.197212 systemd[1]: Startup finished in 1.419s (kernel) + 8.486s (initrd) + 12.002s (userspace) = 21.908s. 
Jan 30 13:56:50.202000 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:56:51.083022 kubelet[1557]: E0130 13:56:51.082875 1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:56:51.086474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:56:51.086717 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:56:51.087325 systemd[1]: kubelet.service: Consumed 1.619s CPU time. Jan 30 13:56:57.643734 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:56:57.661624 systemd[1]: Started sshd@0-209.38.134.12:22-147.75.109.163:45746.service - OpenSSH per-connection server daemon (147.75.109.163:45746). Jan 30 13:56:57.798752 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 45746 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:57.803109 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:57.819201 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:56:57.830412 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:56:57.837245 systemd-logind[1450]: New session 1 of user core. Jan 30 13:56:57.861876 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:56:57.871550 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 13:56:57.896464 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:56:58.142379 systemd[1575]: Queued start job for default target default.target. Jan 30 13:56:58.153708 systemd[1575]: Created slice app.slice - User Application Slice. Jan 30 13:56:58.153756 systemd[1575]: Reached target paths.target - Paths. Jan 30 13:56:58.153779 systemd[1575]: Reached target timers.target - Timers. Jan 30 13:56:58.173816 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:56:58.186864 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:56:58.188017 systemd[1575]: Reached target sockets.target - Sockets. Jan 30 13:56:58.188049 systemd[1575]: Reached target basic.target - Basic System. Jan 30 13:56:58.188134 systemd[1575]: Reached target default.target - Main User Target. Jan 30 13:56:58.188185 systemd[1575]: Startup finished in 276ms. Jan 30 13:56:58.188530 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:56:58.194667 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:56:58.307483 systemd[1]: Started sshd@1-209.38.134.12:22-147.75.109.163:45758.service - OpenSSH per-connection server daemon (147.75.109.163:45758). Jan 30 13:56:58.385897 sshd[1586]: Accepted publickey for core from 147.75.109.163 port 45758 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:58.388650 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:58.400616 systemd-logind[1450]: New session 2 of user core. Jan 30 13:56:58.409249 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:56:58.487528 sshd[1586]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:58.516148 systemd[1]: sshd@1-209.38.134.12:22-147.75.109.163:45758.service: Deactivated successfully. Jan 30 13:56:58.520739 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 30 13:56:58.527178 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:56:58.541028 systemd[1]: Started sshd@2-209.38.134.12:22-147.75.109.163:45774.service - OpenSSH per-connection server daemon (147.75.109.163:45774). Jan 30 13:56:58.544129 systemd-logind[1450]: Removed session 2. Jan 30 13:56:58.603942 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 45774 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:58.605092 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:58.628731 systemd-logind[1450]: New session 3 of user core. Jan 30 13:56:58.640550 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:56:58.709932 sshd[1593]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:58.721203 systemd[1]: sshd@2-209.38.134.12:22-147.75.109.163:45774.service: Deactivated successfully. Jan 30 13:56:58.724081 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:56:58.727200 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:56:58.733466 systemd[1]: Started sshd@3-209.38.134.12:22-147.75.109.163:45786.service - OpenSSH per-connection server daemon (147.75.109.163:45786). Jan 30 13:56:58.736622 systemd-logind[1450]: Removed session 3. Jan 30 13:56:58.808797 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 45786 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:58.810962 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:58.821305 systemd-logind[1450]: New session 4 of user core. Jan 30 13:56:58.831189 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:56:58.907952 sshd[1600]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:58.918781 systemd[1]: sshd@3-209.38.134.12:22-147.75.109.163:45786.service: Deactivated successfully. 
Jan 30 13:56:58.922336 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:56:58.929464 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:56:58.936402 systemd[1]: Started sshd@4-209.38.134.12:22-147.75.109.163:45794.service - OpenSSH per-connection server daemon (147.75.109.163:45794). Jan 30 13:56:58.941562 systemd-logind[1450]: Removed session 4. Jan 30 13:56:59.008344 sshd[1607]: Accepted publickey for core from 147.75.109.163 port 45794 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:59.012006 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:59.020720 systemd-logind[1450]: New session 5 of user core. Jan 30 13:56:59.028430 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:56:59.145178 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:56:59.145492 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:56:59.160888 sudo[1610]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:59.174645 sshd[1607]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:59.190095 systemd[1]: sshd@4-209.38.134.12:22-147.75.109.163:45794.service: Deactivated successfully. Jan 30 13:56:59.193223 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:56:59.198964 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:56:59.214974 systemd[1]: Started sshd@5-209.38.134.12:22-147.75.109.163:45798.service - OpenSSH per-connection server daemon (147.75.109.163:45798). Jan 30 13:56:59.217804 systemd-logind[1450]: Removed session 5. 
Jan 30 13:56:59.269247 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 45798 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:59.275423 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:59.285676 systemd-logind[1450]: New session 6 of user core. Jan 30 13:56:59.291257 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:56:59.359287 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:56:59.359688 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:56:59.371128 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:59.381697 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:56:59.383391 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:56:59.421012 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:56:59.426640 auditctl[1622]: No rules Jan 30 13:56:59.428318 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:56:59.428685 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:56:59.440334 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:56:59.480638 augenrules[1640]: No rules Jan 30 13:56:59.482397 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:56:59.484388 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:59.491198 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:59.501166 systemd[1]: sshd@5-209.38.134.12:22-147.75.109.163:45798.service: Deactivated successfully. Jan 30 13:56:59.504689 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 30 13:56:59.505757 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:56:59.517451 systemd[1]: Started sshd@6-209.38.134.12:22-147.75.109.163:45800.service - OpenSSH per-connection server daemon (147.75.109.163:45800). Jan 30 13:56:59.521104 systemd-logind[1450]: Removed session 6. Jan 30 13:56:59.587645 sshd[1648]: Accepted publickey for core from 147.75.109.163 port 45800 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:59.591731 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:59.609102 systemd-logind[1450]: New session 7 of user core. Jan 30 13:56:59.611309 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:56:59.678541 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:56:59.682481 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:57:01.174431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:57:01.207496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:01.455642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:01.477090 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:57:01.561869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:01.570792 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:57:01.571104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:01.582354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:01.641817 systemd[1]: Reloading requested from client PID 1703 ('systemctl') (unit session-7.scope)... Jan 30 13:57:01.642153 systemd[1]: Reloading... 
Jan 30 13:57:01.872870 zram_generator::config[1750]: No configuration found. Jan 30 13:57:02.072084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:57:02.204248 systemd[1]: Reloading finished in 561 ms. Jan 30 13:57:02.338487 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:57:02.338616 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:57:02.339993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:02.367242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:02.668119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:02.687936 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:57:02.813237 kubelet[1794]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:57:02.813237 kubelet[1794]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:57:02.813237 kubelet[1794]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:57:02.813237 kubelet[1794]: I0130 13:57:02.812865 1794 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:57:03.649193 kubelet[1794]: I0130 13:57:03.649092 1794 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:57:03.649193 kubelet[1794]: I0130 13:57:03.649141 1794 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:57:03.649570 kubelet[1794]: I0130 13:57:03.649511 1794 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:57:03.677888 kubelet[1794]: I0130 13:57:03.677255 1794 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:57:03.706362 kubelet[1794]: I0130 13:57:03.705885 1794 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:57:03.708858 kubelet[1794]: I0130 13:57:03.707783 1794 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:57:03.708858 kubelet[1794]: I0130 13:57:03.707885 1794 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"209.38.134.12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:57:03.708858 kubelet[1794]: I0130 13:57:03.708501 1794 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:57:03.708858 kubelet[1794]: I0130 13:57:03.708524 1794 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:57:03.708858 kubelet[1794]: I0130 13:57:03.708739 1794 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:57:03.710172 kubelet[1794]: I0130 13:57:03.710129 1794 kubelet.go:400] "Attempting to sync node 
with API server" Jan 30 13:57:03.710172 kubelet[1794]: I0130 13:57:03.710173 1794 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:57:03.710350 kubelet[1794]: I0130 13:57:03.710220 1794 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:57:03.710350 kubelet[1794]: I0130 13:57:03.710252 1794 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:57:03.718219 kubelet[1794]: E0130 13:57:03.717695 1794 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:03.718219 kubelet[1794]: E0130 13:57:03.718001 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:03.719186 kubelet[1794]: I0130 13:57:03.719155 1794 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:57:03.721577 kubelet[1794]: I0130 13:57:03.721380 1794 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:57:03.721577 kubelet[1794]: W0130 13:57:03.721485 1794 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:57:03.723526 kubelet[1794]: I0130 13:57:03.722620 1794 server.go:1264] "Started kubelet" Jan 30 13:57:03.727726 kubelet[1794]: I0130 13:57:03.727587 1794 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:57:03.738111 kubelet[1794]: E0130 13:57:03.737766 1794 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{209.38.134.12.181f7d00d56ea6ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:209.38.134.12,UID:209.38.134.12,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:209.38.134.12,},FirstTimestamp:2025-01-30 13:57:03.7225715 +0000 UTC m=+1.026560008,LastTimestamp:2025-01-30 13:57:03.7225715 +0000 UTC m=+1.026560008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:209.38.134.12,}" Jan 30 13:57:03.738844 kubelet[1794]: I0130 13:57:03.738044 1794 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:57:03.741163 kubelet[1794]: I0130 13:57:03.741113 1794 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:57:03.744556 kubelet[1794]: I0130 13:57:03.742794 1794 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:57:03.744556 kubelet[1794]: I0130 13:57:03.743313 1794 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:57:03.746308 kubelet[1794]: I0130 13:57:03.746264 1794 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:57:03.747810 kubelet[1794]: I0130 13:57:03.747764 1794 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 
13:57:03.748115 kubelet[1794]: I0130 13:57:03.748100 1794 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:57:03.754898 kubelet[1794]: W0130 13:57:03.754765 1794 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:57:03.755291 kubelet[1794]: E0130 13:57:03.755247 1794 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:57:03.762726 kubelet[1794]: I0130 13:57:03.762264 1794 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:57:03.762726 kubelet[1794]: I0130 13:57:03.762480 1794 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:57:03.767436 kubelet[1794]: E0130 13:57:03.767224 1794 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:57:03.769862 kubelet[1794]: I0130 13:57:03.769558 1794 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:57:03.770573 kubelet[1794]: E0130 13:57:03.770099 1794 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"209.38.134.12\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 30 13:57:03.771547 kubelet[1794]: W0130 13:57:03.771353 1794 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:57:03.771547 kubelet[1794]: E0130 13:57:03.771398 1794 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:57:03.775537 kubelet[1794]: W0130 13:57:03.775493 1794 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "209.38.134.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:57:03.775939 kubelet[1794]: E0130 13:57:03.775733 1794 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "209.38.134.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:57:03.812614 kubelet[1794]: I0130 13:57:03.812183 1794 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:57:03.812614 kubelet[1794]: I0130 13:57:03.812221 1794 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:57:03.812614 
kubelet[1794]: I0130 13:57:03.812253 1794 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:57:03.822985 kubelet[1794]: I0130 13:57:03.822919 1794 policy_none.go:49] "None policy: Start" Jan 30 13:57:03.830991 kubelet[1794]: I0130 13:57:03.825197 1794 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:57:03.830991 kubelet[1794]: I0130 13:57:03.825253 1794 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:57:03.852140 kubelet[1794]: I0130 13:57:03.851062 1794 kubelet_node_status.go:73] "Attempting to register node" node="209.38.134.12" Jan 30 13:57:03.858009 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:57:03.867874 kubelet[1794]: I0130 13:57:03.867477 1794 kubelet_node_status.go:76] "Successfully registered node" node="209.38.134.12" Jan 30 13:57:03.882047 kubelet[1794]: E0130 13:57:03.881921 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:03.898988 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:57:03.911112 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:57:03.928513 kubelet[1794]: I0130 13:57:03.921491 1794 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:57:03.928513 kubelet[1794]: I0130 13:57:03.921794 1794 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:57:03.928513 kubelet[1794]: I0130 13:57:03.922033 1794 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:57:03.937876 kubelet[1794]: E0130 13:57:03.937265 1794 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"209.38.134.12\" not found" Jan 30 13:57:03.940974 kubelet[1794]: I0130 13:57:03.940905 1794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:57:03.946258 kubelet[1794]: I0130 13:57:03.946204 1794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:57:03.946510 kubelet[1794]: I0130 13:57:03.946493 1794 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:57:03.946635 kubelet[1794]: I0130 13:57:03.946622 1794 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:57:03.947564 kubelet[1794]: E0130 13:57:03.946801 1794 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 13:57:03.982339 kubelet[1794]: E0130 13:57:03.982073 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.083546 kubelet[1794]: E0130 13:57:04.083434 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.184395 kubelet[1794]: E0130 13:57:04.184169 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.213531 sudo[1651]: pam_unix(sudo:session): session closed for user 
root Jan 30 13:57:04.217847 sshd[1648]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:04.225697 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:57:04.231118 systemd[1]: sshd@6-209.38.134.12:22-147.75.109.163:45800.service: Deactivated successfully. Jan 30 13:57:04.242697 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:57:04.245305 systemd-logind[1450]: Removed session 7. Jan 30 13:57:04.285320 kubelet[1794]: E0130 13:57:04.285247 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.386229 kubelet[1794]: E0130 13:57:04.386078 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.487486 kubelet[1794]: E0130 13:57:04.487269 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.588475 kubelet[1794]: E0130 13:57:04.588406 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.654133 kubelet[1794]: I0130 13:57:04.654046 1794 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:57:04.654377 kubelet[1794]: W0130 13:57:04.654356 1794 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:57:04.689691 kubelet[1794]: E0130 13:57:04.689563 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.718583 kubelet[1794]: E0130 13:57:04.718250 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Jan 30 13:57:04.790599 kubelet[1794]: E0130 13:57:04.790377 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.891426 kubelet[1794]: E0130 13:57:04.891344 1794 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.134.12\" not found" Jan 30 13:57:04.993944 kubelet[1794]: I0130 13:57:04.993778 1794 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 13:57:04.995174 containerd[1474]: time="2025-01-30T13:57:04.994571891Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:57:04.996020 kubelet[1794]: I0130 13:57:04.994932 1794 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:57:05.718484 kubelet[1794]: E0130 13:57:05.718404 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:05.719439 kubelet[1794]: I0130 13:57:05.718763 1794 apiserver.go:52] "Watching apiserver" Jan 30 13:57:05.729574 kubelet[1794]: I0130 13:57:05.729447 1794 topology_manager.go:215] "Topology Admit Handler" podUID="49544baa-e68c-4257-a439-21c0ff0f2530" podNamespace="calico-system" podName="calico-node-8hdtk" Jan 30 13:57:05.730150 kubelet[1794]: I0130 13:57:05.729918 1794 topology_manager.go:215] "Topology Admit Handler" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" podNamespace="calico-system" podName="csi-node-driver-gz6sd" Jan 30 13:57:05.730150 kubelet[1794]: I0130 13:57:05.730045 1794 topology_manager.go:215] "Topology Admit Handler" podUID="b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e" podNamespace="kube-system" podName="kube-proxy-8k8bq" Jan 30 13:57:05.731850 kubelet[1794]: E0130 13:57:05.731573 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:05.748685 kubelet[1794]: I0130 13:57:05.748645 1794 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:57:05.749263 systemd[1]: Created slice kubepods-besteffort-pod49544baa_e68c_4257_a439_21c0ff0f2530.slice - libcontainer container kubepods-besteffort-pod49544baa_e68c_4257_a439_21c0ff0f2530.slice. Jan 30 13:57:05.764956 kubelet[1794]: I0130 13:57:05.764900 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6a0e4b17-d4ac-44a2-88ca-fc8569ad472d-varrun\") pod \"csi-node-driver-gz6sd\" (UID: \"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d\") " pod="calico-system/csi-node-driver-gz6sd" Jan 30 13:57:05.767577 kubelet[1794]: I0130 13:57:05.767531 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgn8n\" (UniqueName: \"kubernetes.io/projected/6a0e4b17-d4ac-44a2-88ca-fc8569ad472d-kube-api-access-pgn8n\") pod \"csi-node-driver-gz6sd\" (UID: \"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d\") " pod="calico-system/csi-node-driver-gz6sd" Jan 30 13:57:05.767891 kubelet[1794]: I0130 13:57:05.767867 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-xtables-lock\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.768033 kubelet[1794]: I0130 13:57:05.768014 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-var-run-calico\") pod 
\"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.768324 kubelet[1794]: I0130 13:57:05.768305 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-cni-bin-dir\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.768446 kubelet[1794]: I0130 13:57:05.768434 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-cni-net-dir\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.768521 kubelet[1794]: I0130 13:57:05.768511 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-flexvol-driver-host\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.768600 kubelet[1794]: I0130 13:57:05.768590 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a0e4b17-d4ac-44a2-88ca-fc8569ad472d-kubelet-dir\") pod \"csi-node-driver-gz6sd\" (UID: \"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d\") " pod="calico-system/csi-node-driver-gz6sd" Jan 30 13:57:05.768673 kubelet[1794]: I0130 13:57:05.768660 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6a0e4b17-d4ac-44a2-88ca-fc8569ad472d-socket-dir\") pod \"csi-node-driver-gz6sd\" (UID: 
\"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d\") " pod="calico-system/csi-node-driver-gz6sd" Jan 30 13:57:05.768754 kubelet[1794]: I0130 13:57:05.768739 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6a0e4b17-d4ac-44a2-88ca-fc8569ad472d-registration-dir\") pod \"csi-node-driver-gz6sd\" (UID: \"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d\") " pod="calico-system/csi-node-driver-gz6sd" Jan 30 13:57:05.769017 kubelet[1794]: I0130 13:57:05.768973 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzmd\" (UniqueName: \"kubernetes.io/projected/b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e-kube-api-access-lnzmd\") pod \"kube-proxy-8k8bq\" (UID: \"b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e\") " pod="kube-system/kube-proxy-8k8bq" Jan 30 13:57:05.769088 kubelet[1794]: I0130 13:57:05.769038 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-lib-modules\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.769088 kubelet[1794]: I0130 13:57:05.769074 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49544baa-e68c-4257-a439-21c0ff0f2530-tigera-ca-bundle\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.769159 kubelet[1794]: I0130 13:57:05.769105 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e-kube-proxy\") pod \"kube-proxy-8k8bq\" (UID: \"b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e\") " 
pod="kube-system/kube-proxy-8k8bq" Jan 30 13:57:05.769159 kubelet[1794]: I0130 13:57:05.769135 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e-xtables-lock\") pod \"kube-proxy-8k8bq\" (UID: \"b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e\") " pod="kube-system/kube-proxy-8k8bq" Jan 30 13:57:05.769241 kubelet[1794]: I0130 13:57:05.769178 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49544baa-e68c-4257-a439-21c0ff0f2530-node-certs\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.769241 kubelet[1794]: I0130 13:57:05.769209 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-cni-log-dir\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.769307 kubelet[1794]: I0130 13:57:05.769238 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddpq\" (UniqueName: \"kubernetes.io/projected/49544baa-e68c-4257-a439-21c0ff0f2530-kube-api-access-fddpq\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.769307 kubelet[1794]: I0130 13:57:05.769277 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e-lib-modules\") pod \"kube-proxy-8k8bq\" (UID: \"b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e\") " pod="kube-system/kube-proxy-8k8bq" Jan 30 13:57:05.769664 kubelet[1794]: I0130 
13:57:05.769529 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-policysync\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.769664 kubelet[1794]: I0130 13:57:05.769585 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49544baa-e68c-4257-a439-21c0ff0f2530-var-lib-calico\") pod \"calico-node-8hdtk\" (UID: \"49544baa-e68c-4257-a439-21c0ff0f2530\") " pod="calico-system/calico-node-8hdtk" Jan 30 13:57:05.772915 systemd[1]: Created slice kubepods-besteffort-podb4b62b00_b1c0_42d0_b50e_05bbc4b6d43e.slice - libcontainer container kubepods-besteffort-podb4b62b00_b1c0_42d0_b50e_05bbc4b6d43e.slice. Jan 30 13:57:05.895424 kubelet[1794]: E0130 13:57:05.895375 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:05.895424 kubelet[1794]: W0130 13:57:05.895411 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:05.896018 kubelet[1794]: E0130 13:57:05.895439 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:05.901873 kubelet[1794]: E0130 13:57:05.901243 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:05.901873 kubelet[1794]: W0130 13:57:05.901297 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:05.901873 kubelet[1794]: E0130 13:57:05.901322 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:05.908974 kubelet[1794]: E0130 13:57:05.908929 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:05.908974 kubelet[1794]: W0130 13:57:05.908963 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:05.908974 kubelet[1794]: E0130 13:57:05.908993 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:06.067300 kubelet[1794]: E0130 13:57:06.067139 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:06.069164 containerd[1474]: time="2025-01-30T13:57:06.069088478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8hdtk,Uid:49544baa-e68c-4257-a439-21c0ff0f2530,Namespace:calico-system,Attempt:0,}" Jan 30 13:57:06.080360 kubelet[1794]: E0130 13:57:06.080049 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:06.081657 containerd[1474]: time="2025-01-30T13:57:06.081240860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8k8bq,Uid:b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e,Namespace:kube-system,Attempt:0,}" Jan 30 13:57:06.718669 kubelet[1794]: E0130 13:57:06.718548 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:06.787713 containerd[1474]: time="2025-01-30T13:57:06.786553392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:06.790981 containerd[1474]: time="2025-01-30T13:57:06.790903062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:57:06.794807 containerd[1474]: time="2025-01-30T13:57:06.794737354Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:06.797971 containerd[1474]: time="2025-01-30T13:57:06.797884411Z" level=info 
msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:06.798420 containerd[1474]: time="2025-01-30T13:57:06.798390977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:57:06.802184 containerd[1474]: time="2025-01-30T13:57:06.802109933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:06.806506 containerd[1474]: time="2025-01-30T13:57:06.806414455Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 725.027272ms" Jan 30 13:57:06.808397 containerd[1474]: time="2025-01-30T13:57:06.808065937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 738.774781ms" Jan 30 13:57:06.889165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987524099.mount: Deactivated successfully. Jan 30 13:57:07.033007 containerd[1474]: time="2025-01-30T13:57:07.032640810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:07.033007 containerd[1474]: time="2025-01-30T13:57:07.032768458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:07.033007 containerd[1474]: time="2025-01-30T13:57:07.032806773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:07.033910 containerd[1474]: time="2025-01-30T13:57:07.033462786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:07.042709 containerd[1474]: time="2025-01-30T13:57:07.041021044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:07.055256 containerd[1474]: time="2025-01-30T13:57:07.044455792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:07.055256 containerd[1474]: time="2025-01-30T13:57:07.054980220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:07.055256 containerd[1474]: time="2025-01-30T13:57:07.055172859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:07.208420 systemd[1]: Started cri-containerd-4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2.scope - libcontainer container 4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2. Jan 30 13:57:07.221805 systemd[1]: Started cri-containerd-f8e8322769ece349e9bca76da566367128f796b7eb89b22b6882cc744ef58659.scope - libcontainer container f8e8322769ece349e9bca76da566367128f796b7eb89b22b6882cc744ef58659. 
Jan 30 13:57:07.289896 containerd[1474]: time="2025-01-30T13:57:07.289657637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8k8bq,Uid:b4b62b00-b1c0-42d0-b50e-05bbc4b6d43e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8e8322769ece349e9bca76da566367128f796b7eb89b22b6882cc744ef58659\"" Jan 30 13:57:07.294306 kubelet[1794]: E0130 13:57:07.293696 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:07.297976 containerd[1474]: time="2025-01-30T13:57:07.297398839Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:57:07.298858 containerd[1474]: time="2025-01-30T13:57:07.298750132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8hdtk,Uid:49544baa-e68c-4257-a439-21c0ff0f2530,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2\"" Jan 30 13:57:07.301072 kubelet[1794]: E0130 13:57:07.300689 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:07.718948 kubelet[1794]: E0130 13:57:07.718756 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:07.951874 kubelet[1794]: E0130 13:57:07.950769 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:08.698968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345827828.mount: Deactivated successfully. 
Jan 30 13:57:08.719270 kubelet[1794]: E0130 13:57:08.719191 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:09.473779 containerd[1474]: time="2025-01-30T13:57:09.473697853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:09.474811 containerd[1474]: time="2025-01-30T13:57:09.474776816Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:57:09.475923 containerd[1474]: time="2025-01-30T13:57:09.475864390Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:09.478989 containerd[1474]: time="2025-01-30T13:57:09.478917462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:09.480578 containerd[1474]: time="2025-01-30T13:57:09.480507940Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.183017818s" Jan 30 13:57:09.480692 containerd[1474]: time="2025-01-30T13:57:09.480580517Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:57:09.482401 containerd[1474]: time="2025-01-30T13:57:09.482237672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:57:09.486402 
containerd[1474]: time="2025-01-30T13:57:09.486349583Z" level=info msg="CreateContainer within sandbox \"f8e8322769ece349e9bca76da566367128f796b7eb89b22b6882cc744ef58659\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:57:09.517202 containerd[1474]: time="2025-01-30T13:57:09.517099044Z" level=info msg="CreateContainer within sandbox \"f8e8322769ece349e9bca76da566367128f796b7eb89b22b6882cc744ef58659\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f231869fa727d10ca184e53a27947686266253d17c58f9a070658381f2d787ed\"" Jan 30 13:57:09.518161 containerd[1474]: time="2025-01-30T13:57:09.518107348Z" level=info msg="StartContainer for \"f231869fa727d10ca184e53a27947686266253d17c58f9a070658381f2d787ed\"" Jan 30 13:57:09.579204 systemd[1]: Started cri-containerd-f231869fa727d10ca184e53a27947686266253d17c58f9a070658381f2d787ed.scope - libcontainer container f231869fa727d10ca184e53a27947686266253d17c58f9a070658381f2d787ed. Jan 30 13:57:09.645119 containerd[1474]: time="2025-01-30T13:57:09.644799880Z" level=info msg="StartContainer for \"f231869fa727d10ca184e53a27947686266253d17c58f9a070658381f2d787ed\" returns successfully" Jan 30 13:57:09.719783 kubelet[1794]: E0130 13:57:09.719623 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:09.954281 kubelet[1794]: E0130 13:57:09.953285 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:10.008438 kubelet[1794]: E0130 13:57:10.005747 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 
13:57:10.025887 kubelet[1794]: I0130 13:57:10.024954 1794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8k8bq" podStartSLOduration=4.838560252 podStartE2EDuration="7.024929711s" podCreationTimestamp="2025-01-30 13:57:03 +0000 UTC" firstStartedPulling="2025-01-30 13:57:07.295320816 +0000 UTC m=+4.599309314" lastFinishedPulling="2025-01-30 13:57:09.481690299 +0000 UTC m=+6.785678773" observedRunningTime="2025-01-30 13:57:10.02268584 +0000 UTC m=+7.326674349" watchObservedRunningTime="2025-01-30 13:57:10.024929711 +0000 UTC m=+7.328918219" Jan 30 13:57:10.089440 kubelet[1794]: E0130 13:57:10.089356 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.089950 kubelet[1794]: W0130 13:57:10.089397 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.089950 kubelet[1794]: E0130 13:57:10.089560 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.093114 kubelet[1794]: E0130 13:57:10.092939 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.093407 kubelet[1794]: W0130 13:57:10.093212 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.093773 kubelet[1794]: E0130 13:57:10.093487 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.095032 kubelet[1794]: E0130 13:57:10.094991 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.095490 kubelet[1794]: W0130 13:57:10.095341 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.095703 kubelet[1794]: E0130 13:57:10.095549 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.096721 kubelet[1794]: E0130 13:57:10.096624 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.096721 kubelet[1794]: W0130 13:57:10.096645 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.096721 kubelet[1794]: E0130 13:57:10.096669 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.097380 kubelet[1794]: E0130 13:57:10.097224 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.097380 kubelet[1794]: W0130 13:57:10.097258 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.097380 kubelet[1794]: E0130 13:57:10.097276 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.097747 kubelet[1794]: E0130 13:57:10.097635 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.097747 kubelet[1794]: W0130 13:57:10.097681 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.097747 kubelet[1794]: E0130 13:57:10.097694 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.098252 kubelet[1794]: E0130 13:57:10.098126 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.098252 kubelet[1794]: W0130 13:57:10.098136 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.098252 kubelet[1794]: E0130 13:57:10.098146 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.098740 kubelet[1794]: E0130 13:57:10.098585 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.098740 kubelet[1794]: W0130 13:57:10.098618 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.098740 kubelet[1794]: E0130 13:57:10.098631 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.099501 kubelet[1794]: E0130 13:57:10.099482 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.099605 kubelet[1794]: W0130 13:57:10.099592 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.099662 kubelet[1794]: E0130 13:57:10.099653 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.100208 kubelet[1794]: E0130 13:57:10.100184 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.100506 kubelet[1794]: W0130 13:57:10.100328 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.100506 kubelet[1794]: E0130 13:57:10.100354 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.100762 kubelet[1794]: E0130 13:57:10.100698 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.100762 kubelet[1794]: W0130 13:57:10.100710 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.100762 kubelet[1794]: E0130 13:57:10.100720 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.101083 kubelet[1794]: E0130 13:57:10.101071 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.101250 kubelet[1794]: W0130 13:57:10.101139 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.101250 kubelet[1794]: E0130 13:57:10.101152 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.101412 kubelet[1794]: E0130 13:57:10.101399 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.101483 kubelet[1794]: W0130 13:57:10.101472 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.101567 kubelet[1794]: E0130 13:57:10.101527 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.103176 kubelet[1794]: E0130 13:57:10.103043 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.103176 kubelet[1794]: W0130 13:57:10.103061 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.103176 kubelet[1794]: E0130 13:57:10.103077 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.103551 kubelet[1794]: E0130 13:57:10.103372 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.103623 kubelet[1794]: W0130 13:57:10.103609 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.103725 kubelet[1794]: E0130 13:57:10.103669 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.104205 kubelet[1794]: E0130 13:57:10.104189 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.104391 kubelet[1794]: W0130 13:57:10.104280 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.104391 kubelet[1794]: E0130 13:57:10.104295 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.104746 kubelet[1794]: E0130 13:57:10.104729 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.104971 kubelet[1794]: W0130 13:57:10.104872 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.104971 kubelet[1794]: E0130 13:57:10.104890 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.105108 kubelet[1794]: E0130 13:57:10.105099 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.105204 kubelet[1794]: W0130 13:57:10.105149 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.105204 kubelet[1794]: E0130 13:57:10.105164 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.105555 kubelet[1794]: E0130 13:57:10.105546 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.105646 kubelet[1794]: W0130 13:57:10.105601 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.105646 kubelet[1794]: E0130 13:57:10.105612 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.106006 kubelet[1794]: E0130 13:57:10.105947 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.106006 kubelet[1794]: W0130 13:57:10.105956 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.106006 kubelet[1794]: E0130 13:57:10.105966 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.110081 kubelet[1794]: E0130 13:57:10.109454 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.110081 kubelet[1794]: W0130 13:57:10.109482 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.110081 kubelet[1794]: E0130 13:57:10.109527 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.110733 kubelet[1794]: E0130 13:57:10.110706 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.110972 kubelet[1794]: W0130 13:57:10.110863 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.110972 kubelet[1794]: E0130 13:57:10.110909 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.111580 kubelet[1794]: E0130 13:57:10.111381 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.111580 kubelet[1794]: W0130 13:57:10.111397 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.111580 kubelet[1794]: E0130 13:57:10.111424 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.113322 kubelet[1794]: E0130 13:57:10.113111 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.113322 kubelet[1794]: W0130 13:57:10.113135 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.113322 kubelet[1794]: E0130 13:57:10.113177 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.113605 kubelet[1794]: E0130 13:57:10.113591 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.113670 kubelet[1794]: W0130 13:57:10.113656 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.113816 kubelet[1794]: E0130 13:57:10.113774 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.114289 kubelet[1794]: E0130 13:57:10.114178 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.114289 kubelet[1794]: W0130 13:57:10.114200 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.114289 kubelet[1794]: E0130 13:57:10.114235 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.114918 kubelet[1794]: E0130 13:57:10.114729 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.114918 kubelet[1794]: W0130 13:57:10.114747 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.114918 kubelet[1794]: E0130 13:57:10.114778 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.115576 kubelet[1794]: E0130 13:57:10.115303 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.115576 kubelet[1794]: W0130 13:57:10.115320 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.115576 kubelet[1794]: E0130 13:57:10.115407 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.115785 kubelet[1794]: E0130 13:57:10.115769 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.116068 kubelet[1794]: W0130 13:57:10.115908 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.116068 kubelet[1794]: E0130 13:57:10.115946 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.117393 kubelet[1794]: E0130 13:57:10.117086 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.117393 kubelet[1794]: W0130 13:57:10.117107 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.117393 kubelet[1794]: E0130 13:57:10.117135 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:10.118096 kubelet[1794]: E0130 13:57:10.118069 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.118096 kubelet[1794]: W0130 13:57:10.118094 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.118471 kubelet[1794]: E0130 13:57:10.118120 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.118471 kubelet[1794]: E0130 13:57:10.118407 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:10.118471 kubelet[1794]: W0130 13:57:10.118416 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:10.118471 kubelet[1794]: E0130 13:57:10.118427 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:10.720087 kubelet[1794]: E0130 13:57:10.719901 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:10.842374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030770016.mount: Deactivated successfully. 
Jan 30 13:57:11.008394 kubelet[1794]: E0130 13:57:11.008253 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:11.015362 kubelet[1794]: E0130 13:57:11.014668 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.015362 kubelet[1794]: W0130 13:57:11.014704 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.015362 kubelet[1794]: E0130 13:57:11.014736 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.016069 kubelet[1794]: E0130 13:57:11.015762 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.016069 kubelet[1794]: W0130 13:57:11.015788 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.016069 kubelet[1794]: E0130 13:57:11.015815 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.016473 kubelet[1794]: E0130 13:57:11.016307 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.016473 kubelet[1794]: W0130 13:57:11.016326 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.016473 kubelet[1794]: E0130 13:57:11.016345 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.016655 kubelet[1794]: E0130 13:57:11.016645 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.016798 kubelet[1794]: W0130 13:57:11.016707 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.016798 kubelet[1794]: E0130 13:57:11.016729 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.017475 kubelet[1794]: E0130 13:57:11.017345 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.017475 kubelet[1794]: W0130 13:57:11.017361 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.017475 kubelet[1794]: E0130 13:57:11.017374 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.017898 kubelet[1794]: E0130 13:57:11.017726 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.017898 kubelet[1794]: W0130 13:57:11.017741 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.017898 kubelet[1794]: E0130 13:57:11.017799 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.018500 kubelet[1794]: E0130 13:57:11.018357 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.018500 kubelet[1794]: W0130 13:57:11.018372 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.018500 kubelet[1794]: E0130 13:57:11.018387 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.018809 kubelet[1794]: E0130 13:57:11.018707 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.018809 kubelet[1794]: W0130 13:57:11.018720 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.018809 kubelet[1794]: E0130 13:57:11.018734 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.019801 kubelet[1794]: E0130 13:57:11.019710 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.019801 kubelet[1794]: W0130 13:57:11.019726 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.019801 kubelet[1794]: E0130 13:57:11.019739 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.021026 kubelet[1794]: E0130 13:57:11.020747 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.021026 kubelet[1794]: W0130 13:57:11.020766 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.021026 kubelet[1794]: E0130 13:57:11.020783 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.021359 kubelet[1794]: E0130 13:57:11.021231 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.021359 kubelet[1794]: W0130 13:57:11.021246 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.021359 kubelet[1794]: E0130 13:57:11.021261 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.021717 kubelet[1794]: E0130 13:57:11.021663 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.021717 kubelet[1794]: W0130 13:57:11.021675 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.021717 kubelet[1794]: E0130 13:57:11.021686 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.022240 kubelet[1794]: E0130 13:57:11.022181 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.022240 kubelet[1794]: W0130 13:57:11.022191 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.022240 kubelet[1794]: E0130 13:57:11.022202 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.022651 kubelet[1794]: E0130 13:57:11.022563 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.022651 kubelet[1794]: W0130 13:57:11.022575 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.022651 kubelet[1794]: E0130 13:57:11.022602 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.023084 kubelet[1794]: E0130 13:57:11.022975 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.023084 kubelet[1794]: W0130 13:57:11.022987 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.023084 kubelet[1794]: E0130 13:57:11.023004 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.023546 kubelet[1794]: E0130 13:57:11.023405 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.023546 kubelet[1794]: W0130 13:57:11.023418 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.023546 kubelet[1794]: E0130 13:57:11.023434 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.023877 kubelet[1794]: E0130 13:57:11.023751 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.023877 kubelet[1794]: W0130 13:57:11.023767 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.023877 kubelet[1794]: E0130 13:57:11.023779 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.024462 kubelet[1794]: E0130 13:57:11.024304 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.024462 kubelet[1794]: W0130 13:57:11.024323 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.024462 kubelet[1794]: E0130 13:57:11.024338 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.024982 kubelet[1794]: E0130 13:57:11.024791 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.024982 kubelet[1794]: W0130 13:57:11.024811 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.024982 kubelet[1794]: E0130 13:57:11.024856 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.025223 kubelet[1794]: E0130 13:57:11.025207 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.025357 kubelet[1794]: W0130 13:57:11.025285 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.025357 kubelet[1794]: E0130 13:57:11.025307 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.028178 containerd[1474]: time="2025-01-30T13:57:11.025981880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:11.036845 containerd[1474]: time="2025-01-30T13:57:11.036108350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:57:11.036845 containerd[1474]: time="2025-01-30T13:57:11.036251632Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:11.038981 containerd[1474]: time="2025-01-30T13:57:11.038935712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:11.039803 containerd[1474]: time="2025-01-30T13:57:11.039759308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.557482862s" Jan 30 13:57:11.039905 containerd[1474]: time="2025-01-30T13:57:11.039806418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:57:11.044441 containerd[1474]: time="2025-01-30T13:57:11.044381856Z" level=info msg="CreateContainer within sandbox \"4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:57:11.073651 containerd[1474]: time="2025-01-30T13:57:11.073544707Z" level=info msg="CreateContainer within sandbox \"4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925\"" Jan 30 13:57:11.074677 containerd[1474]: time="2025-01-30T13:57:11.074577960Z" level=info msg="StartContainer for \"68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925\"" Jan 30 13:57:11.123876 kubelet[1794]: E0130 13:57:11.123050 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.123876 kubelet[1794]: W0130 13:57:11.123090 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.123876 kubelet[1794]: E0130 13:57:11.123121 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.125720 kubelet[1794]: E0130 13:57:11.125048 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.125720 kubelet[1794]: W0130 13:57:11.125080 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.129956 kubelet[1794]: E0130 13:57:11.128965 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.129956 kubelet[1794]: E0130 13:57:11.129275 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.129956 kubelet[1794]: W0130 13:57:11.129291 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.129956 kubelet[1794]: E0130 13:57:11.129318 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.132250 kubelet[1794]: E0130 13:57:11.129709 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.132250 kubelet[1794]: W0130 13:57:11.131056 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.132250 kubelet[1794]: E0130 13:57:11.132025 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.132563 kubelet[1794]: E0130 13:57:11.132536 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.132722 kubelet[1794]: W0130 13:57:11.132697 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.132855 kubelet[1794]: E0130 13:57:11.132840 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.134198 kubelet[1794]: E0130 13:57:11.133737 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.134198 kubelet[1794]: W0130 13:57:11.134137 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.134838 kubelet[1794]: E0130 13:57:11.134400 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.135868 kubelet[1794]: E0130 13:57:11.135396 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.135868 kubelet[1794]: W0130 13:57:11.135424 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.135868 kubelet[1794]: E0130 13:57:11.135530 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.137409 kubelet[1794]: E0130 13:57:11.136380 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.137409 kubelet[1794]: W0130 13:57:11.136410 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.137699 kubelet[1794]: E0130 13:57:11.136435 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.138045 kubelet[1794]: E0130 13:57:11.137991 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.138045 kubelet[1794]: W0130 13:57:11.138010 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.138297 kubelet[1794]: E0130 13:57:11.138190 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.138933 kubelet[1794]: E0130 13:57:11.138908 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.139399 kubelet[1794]: W0130 13:57:11.138933 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.139399 kubelet[1794]: E0130 13:57:11.138965 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:11.139552 kubelet[1794]: E0130 13:57:11.139534 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.139597 kubelet[1794]: W0130 13:57:11.139556 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.139597 kubelet[1794]: E0130 13:57:11.139577 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.140913 kubelet[1794]: E0130 13:57:11.140300 1794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:11.140913 kubelet[1794]: W0130 13:57:11.140324 1794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:11.140913 kubelet[1794]: E0130 13:57:11.140343 1794 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:11.145254 systemd[1]: Started cri-containerd-68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925.scope - libcontainer container 68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925. Jan 30 13:57:11.210311 containerd[1474]: time="2025-01-30T13:57:11.209988035Z" level=info msg="StartContainer for \"68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925\" returns successfully" Jan 30 13:57:11.223626 systemd[1]: cri-containerd-68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925.scope: Deactivated successfully. 
Jan 30 13:57:11.409067 containerd[1474]: time="2025-01-30T13:57:11.408769649Z" level=info msg="shim disconnected" id=68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925 namespace=k8s.io Jan 30 13:57:11.409067 containerd[1474]: time="2025-01-30T13:57:11.408868716Z" level=warning msg="cleaning up after shim disconnected" id=68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925 namespace=k8s.io Jan 30 13:57:11.409067 containerd[1474]: time="2025-01-30T13:57:11.408878597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:11.721303 kubelet[1794]: E0130 13:57:11.721042 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:11.781145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68a5131e8ba06a79726191b94a74854a68ad618d9c72b06fbd0bb07895e81925-rootfs.mount: Deactivated successfully. Jan 30 13:57:11.948948 kubelet[1794]: E0130 13:57:11.947762 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:12.012922 kubelet[1794]: E0130 13:57:12.012724 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:12.014419 containerd[1474]: time="2025-01-30T13:57:12.014353149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:57:12.269332 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 30 13:57:12.722276 kubelet[1794]: E0130 13:57:12.722090 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:13.723858 kubelet[1794]: E0130 13:57:13.722912 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:13.950356 kubelet[1794]: E0130 13:57:13.950041 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:14.723492 kubelet[1794]: E0130 13:57:14.723384 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:15.724692 kubelet[1794]: E0130 13:57:15.724610 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:15.949627 kubelet[1794]: E0130 13:57:15.948723 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:16.725704 kubelet[1794]: E0130 13:57:16.725613 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:17.035049 containerd[1474]: time="2025-01-30T13:57:17.034605170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:17.037852 containerd[1474]: time="2025-01-30T13:57:17.037717172Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:57:17.040907 containerd[1474]: time="2025-01-30T13:57:17.039487837Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:17.046839 containerd[1474]: time="2025-01-30T13:57:17.046750740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.03210802s" Jan 30 13:57:17.047178 containerd[1474]: time="2025-01-30T13:57:17.047131665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:57:17.047397 containerd[1474]: time="2025-01-30T13:57:17.047374560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:17.052541 containerd[1474]: time="2025-01-30T13:57:17.052483297Z" level=info msg="CreateContainer within sandbox \"4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:57:17.081092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725393058.mount: Deactivated successfully. 
Jan 30 13:57:17.092942 containerd[1474]: time="2025-01-30T13:57:17.092892947Z" level=info msg="CreateContainer within sandbox \"4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17\"" Jan 30 13:57:17.094237 containerd[1474]: time="2025-01-30T13:57:17.094167339Z" level=info msg="StartContainer for \"e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17\"" Jan 30 13:57:17.153008 systemd[1]: Started cri-containerd-e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17.scope - libcontainer container e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17. Jan 30 13:57:17.226169 containerd[1474]: time="2025-01-30T13:57:17.226069444Z" level=info msg="StartContainer for \"e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17\" returns successfully" Jan 30 13:57:17.727677 kubelet[1794]: E0130 13:57:17.727553 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:17.949945 kubelet[1794]: E0130 13:57:17.949082 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:18.045622 kubelet[1794]: E0130 13:57:18.044762 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:18.406443 containerd[1474]: time="2025-01-30T13:57:18.406232208Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: 
no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:57:18.409638 systemd[1]: cri-containerd-e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17.scope: Deactivated successfully. Jan 30 13:57:18.447688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17-rootfs.mount: Deactivated successfully. Jan 30 13:57:18.469300 kubelet[1794]: I0130 13:57:18.467618 1794 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:57:18.651568 containerd[1474]: time="2025-01-30T13:57:18.651477906Z" level=info msg="shim disconnected" id=e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17 namespace=k8s.io Jan 30 13:57:18.652493 containerd[1474]: time="2025-01-30T13:57:18.652093535Z" level=warning msg="cleaning up after shim disconnected" id=e4c73e6d877f39911197e6912a3d6394e5558a3dc0d45e74974516fc7943bc17 namespace=k8s.io Jan 30 13:57:18.652493 containerd[1474]: time="2025-01-30T13:57:18.652135475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:18.728585 kubelet[1794]: E0130 13:57:18.728382 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:18.845103 systemd-resolved[1330]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 30 13:57:18.865030 systemd-timesyncd[1358]: Contacted time server 142.202.190.19:123 (2.flatcar.pool.ntp.org). Jan 30 13:57:18.865154 systemd-timesyncd[1358]: Initial clock synchronization to Thu 2025-01-30 13:57:18.788255 UTC. 
Jan 30 13:57:19.050064 kubelet[1794]: E0130 13:57:19.049510 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:19.053066 containerd[1474]: time="2025-01-30T13:57:19.052641788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:57:19.730472 kubelet[1794]: E0130 13:57:19.730384 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:19.965013 systemd[1]: Created slice kubepods-besteffort-pod6a0e4b17_d4ac_44a2_88ca_fc8569ad472d.slice - libcontainer container kubepods-besteffort-pod6a0e4b17_d4ac_44a2_88ca_fc8569ad472d.slice. Jan 30 13:57:19.971336 containerd[1474]: time="2025-01-30T13:57:19.971262456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gz6sd,Uid:6a0e4b17-d4ac-44a2-88ca-fc8569ad472d,Namespace:calico-system,Attempt:0,}" Jan 30 13:57:20.258584 containerd[1474]: time="2025-01-30T13:57:20.250947668Z" level=error msg="Failed to destroy network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:20.258584 containerd[1474]: time="2025-01-30T13:57:20.253420048Z" level=error msg="encountered an error cleaning up failed sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:20.258584 containerd[1474]: time="2025-01-30T13:57:20.253620216Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-gz6sd,Uid:6a0e4b17-d4ac-44a2-88ca-fc8569ad472d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:20.258870 kubelet[1794]: E0130 13:57:20.258025 1794 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:20.258870 kubelet[1794]: E0130 13:57:20.258121 1794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gz6sd" Jan 30 13:57:20.258870 kubelet[1794]: E0130 13:57:20.258152 1794 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gz6sd" Jan 30 13:57:20.257610 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd-shm.mount: Deactivated successfully. Jan 30 13:57:20.259086 kubelet[1794]: E0130 13:57:20.258225 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gz6sd_calico-system(6a0e4b17-d4ac-44a2-88ca-fc8569ad472d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gz6sd_calico-system(6a0e4b17-d4ac-44a2-88ca-fc8569ad472d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:20.732701 kubelet[1794]: E0130 13:57:20.732492 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:21.098623 kubelet[1794]: I0130 13:57:21.096515 1794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:57:21.133782 containerd[1474]: time="2025-01-30T13:57:21.126075348Z" level=info msg="StopPodSandbox for \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\"" Jan 30 13:57:21.133782 containerd[1474]: time="2025-01-30T13:57:21.126568411Z" level=info msg="Ensure that sandbox f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd in task-service has been cleanup successfully" Jan 30 13:57:21.236601 containerd[1474]: time="2025-01-30T13:57:21.236526963Z" level=error msg="StopPodSandbox for \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\" failed" error="failed to destroy network for sandbox 
\"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:21.237344 kubelet[1794]: E0130 13:57:21.237219 1794 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:57:21.237499 kubelet[1794]: E0130 13:57:21.237309 1794 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd"} Jan 30 13:57:21.237499 kubelet[1794]: E0130 13:57:21.237426 1794 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:57:21.237499 kubelet[1794]: E0130 13:57:21.237465 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gz6sd" podUID="6a0e4b17-d4ac-44a2-88ca-fc8569ad472d" Jan 30 13:57:21.600492 kubelet[1794]: I0130 13:57:21.598814 1794 topology_manager.go:215] "Topology Admit Handler" podUID="8a5435f6-5bd8-40ba-84b3-23f344925544" podNamespace="default" podName="nginx-deployment-85f456d6dd-xf9gl" Jan 30 13:57:21.612903 systemd[1]: Created slice kubepods-besteffort-pod8a5435f6_5bd8_40ba_84b3_23f344925544.slice - libcontainer container kubepods-besteffort-pod8a5435f6_5bd8_40ba_84b3_23f344925544.slice. Jan 30 13:57:21.717813 kubelet[1794]: I0130 13:57:21.717560 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85gp\" (UniqueName: \"kubernetes.io/projected/8a5435f6-5bd8-40ba-84b3-23f344925544-kube-api-access-v85gp\") pod \"nginx-deployment-85f456d6dd-xf9gl\" (UID: \"8a5435f6-5bd8-40ba-84b3-23f344925544\") " pod="default/nginx-deployment-85f456d6dd-xf9gl" Jan 30 13:57:21.733691 kubelet[1794]: E0130 13:57:21.733446 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:21.930319 containerd[1474]: time="2025-01-30T13:57:21.929161109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xf9gl,Uid:8a5435f6-5bd8-40ba-84b3-23f344925544,Namespace:default,Attempt:0,}" Jan 30 13:57:22.107277 containerd[1474]: time="2025-01-30T13:57:22.107126650Z" level=error msg="Failed to destroy network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:22.108873 containerd[1474]: time="2025-01-30T13:57:22.107846245Z" level=error 
msg="encountered an error cleaning up failed sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:57:22.108873 containerd[1474]: time="2025-01-30T13:57:22.107934397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xf9gl,Uid:8a5435f6-5bd8-40ba-84b3-23f344925544,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:57:22.112856 kubelet[1794]: E0130 13:57:22.109342 1794 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:57:22.112856 kubelet[1794]: E0130 13:57:22.109433 1794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xf9gl"
Jan 30 13:57:22.112856 kubelet[1794]: E0130 13:57:22.109473 1794 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xf9gl"
Jan 30 13:57:22.112191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35-shm.mount: Deactivated successfully.
Jan 30 13:57:22.113282 kubelet[1794]: E0130 13:57:22.109545 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xf9gl_default(8a5435f6-5bd8-40ba-84b3-23f344925544)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xf9gl_default(8a5435f6-5bd8-40ba-84b3-23f344925544)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xf9gl" podUID="8a5435f6-5bd8-40ba-84b3-23f344925544"
Jan 30 13:57:22.734061 kubelet[1794]: E0130 13:57:22.733981 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:23.105406 kubelet[1794]: I0130 13:57:23.105251 1794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35"
Jan 30 13:57:23.106634 containerd[1474]: time="2025-01-30T13:57:23.106577631Z" level=info msg="StopPodSandbox for \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\""
Jan 30 13:57:23.108462 containerd[1474]: time="2025-01-30T13:57:23.107723573Z" level=info msg="Ensure that sandbox 3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35 in task-service has been cleanup successfully"
Jan 30 13:57:23.184376 containerd[1474]: time="2025-01-30T13:57:23.183810452Z" level=error msg="StopPodSandbox for \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\" failed" error="failed to destroy network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:57:23.185134 kubelet[1794]: E0130 13:57:23.184803 1794 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35"
Jan 30 13:57:23.185347 kubelet[1794]: E0130 13:57:23.185166 1794 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35"}
Jan 30 13:57:23.185391 kubelet[1794]: E0130 13:57:23.185344 1794 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a5435f6-5bd8-40ba-84b3-23f344925544\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 13:57:23.185503 kubelet[1794]: E0130 13:57:23.185429 1794 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a5435f6-5bd8-40ba-84b3-23f344925544\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xf9gl" podUID="8a5435f6-5bd8-40ba-84b3-23f344925544"
Jan 30 13:57:23.711165 kubelet[1794]: E0130 13:57:23.711029 1794 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:23.735121 kubelet[1794]: E0130 13:57:23.735056 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:24.735567 kubelet[1794]: E0130 13:57:24.735494 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:25.736278 kubelet[1794]: E0130 13:57:25.736216 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:26.737683 kubelet[1794]: E0130 13:57:26.737631 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:27.739603 kubelet[1794]: E0130 13:57:27.739513 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:28.264691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408994085.mount: Deactivated successfully.
Jan 30 13:57:28.322878 containerd[1474]: time="2025-01-30T13:57:28.322008184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:57:28.323304 containerd[1474]: time="2025-01-30T13:57:28.322906642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 30 13:57:28.324794 containerd[1474]: time="2025-01-30T13:57:28.324487262Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:57:28.327252 containerd[1474]: time="2025-01-30T13:57:28.327181632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:57:28.328138 containerd[1474]: time="2025-01-30T13:57:28.327914473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.275208815s"
Jan 30 13:57:28.328138 containerd[1474]: time="2025-01-30T13:57:28.327962284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 30 13:57:28.362311 containerd[1474]: time="2025-01-30T13:57:28.362243336Z" level=info msg="CreateContainer within sandbox \"4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 30 13:57:28.387502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027465059.mount: Deactivated successfully.
Jan 30 13:57:28.398655 containerd[1474]: time="2025-01-30T13:57:28.398460343Z" level=info msg="CreateContainer within sandbox \"4d768c71a245df1fc5a5113a3126593c44b8707280811ba427b72bf86b7660f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c8afbe8ac788c079fe9652c88bf8798b42d1b60c2c57434c9aea8a665d0c7ab8\""
Jan 30 13:57:28.399765 containerd[1474]: time="2025-01-30T13:57:28.399606834Z" level=info msg="StartContainer for \"c8afbe8ac788c079fe9652c88bf8798b42d1b60c2c57434c9aea8a665d0c7ab8\""
Jan 30 13:57:28.522233 systemd[1]: Started cri-containerd-c8afbe8ac788c079fe9652c88bf8798b42d1b60c2c57434c9aea8a665d0c7ab8.scope - libcontainer container c8afbe8ac788c079fe9652c88bf8798b42d1b60c2c57434c9aea8a665d0c7ab8.
Jan 30 13:57:28.578545 containerd[1474]: time="2025-01-30T13:57:28.578473474Z" level=info msg="StartContainer for \"c8afbe8ac788c079fe9652c88bf8798b42d1b60c2c57434c9aea8a665d0c7ab8\" returns successfully"
Jan 30 13:57:28.695558 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 30 13:57:28.695740 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 30 13:57:28.740352 kubelet[1794]: E0130 13:57:28.740284 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:29.140613 kubelet[1794]: E0130 13:57:29.140556 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:57:29.159866 kubelet[1794]: I0130 13:57:29.158411 1794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8hdtk" podStartSLOduration=5.131592462 podStartE2EDuration="26.158388272s" podCreationTimestamp="2025-01-30 13:57:03 +0000 UTC" firstStartedPulling="2025-01-30 13:57:07.302742175 +0000 UTC m=+4.606730667" lastFinishedPulling="2025-01-30 13:57:28.329537999 +0000 UTC m=+25.633526477" observedRunningTime="2025-01-30 13:57:29.158029011 +0000 UTC m=+26.462017516" watchObservedRunningTime="2025-01-30 13:57:29.158388272 +0000 UTC m=+26.462376775"
Jan 30 13:57:29.741638 kubelet[1794]: E0130 13:57:29.741487 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:30.143868 kubelet[1794]: E0130 13:57:30.143070 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:57:30.742643 kubelet[1794]: E0130 13:57:30.742263 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:30.811969 kernel: bpftool[2634]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 30 13:57:31.162419 systemd-networkd[1377]: vxlan.calico: Link UP
Jan 30 13:57:31.162436 systemd-networkd[1377]: vxlan.calico: Gained carrier
Jan 30 13:57:31.743008 kubelet[1794]: E0130 13:57:31.742944 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:32.743328 kubelet[1794]: E0130 13:57:32.743093 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:32.813362 systemd-networkd[1377]: vxlan.calico: Gained IPv6LL
Jan 30 13:57:33.293029 update_engine[1451]: I20250130 13:57:33.292757 1451 update_attempter.cc:509] Updating boot flags...
Jan 30 13:57:33.331001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2445)
Jan 30 13:57:33.425009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2713)
Jan 30 13:57:33.744256 kubelet[1794]: E0130 13:57:33.744073 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:57:33.948633 containerd[1474]: time="2025-01-30T13:57:33.948571272Z" level=info msg="StopPodSandbox for \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\""
Jan 30 13:57:33.952789 containerd[1474]: time="2025-01-30T13:57:33.952544573Z" level=info msg="StopPodSandbox for \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\""
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.070 [INFO][2743] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.071 [INFO][2743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" iface="eth0" netns="/var/run/netns/cni-ec9be2a0-1762-3a2d-b1e2-7b3ac71ea786"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.072 [INFO][2743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" iface="eth0" netns="/var/run/netns/cni-ec9be2a0-1762-3a2d-b1e2-7b3ac71ea786"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.074 [INFO][2743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" iface="eth0" netns="/var/run/netns/cni-ec9be2a0-1762-3a2d-b1e2-7b3ac71ea786"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.074 [INFO][2743] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.074 [INFO][2743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.144 [INFO][2755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.145 [INFO][2755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.145 [INFO][2755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.184 [WARNING][2755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.184 [INFO][2755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.187 [INFO][2755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:57:34.202888 containerd[1474]: 2025-01-30 13:57:34.191 [INFO][2743] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35"
Jan 30 13:57:34.202888 containerd[1474]: time="2025-01-30T13:57:34.198391689Z" level=info msg="TearDown network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\" successfully"
Jan 30 13:57:34.202888 containerd[1474]: time="2025-01-30T13:57:34.198444230Z" level=info msg="StopPodSandbox for \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\" returns successfully"
Jan 30 13:57:34.201668 systemd[1]: run-netns-cni\x2dec9be2a0\x2d1762\x2d3a2d\x2db1e2\x2d7b3ac71ea786.mount: Deactivated successfully.
Jan 30 13:57:34.205918 containerd[1474]: time="2025-01-30T13:57:34.203487208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xf9gl,Uid:8a5435f6-5bd8-40ba-84b3-23f344925544,Namespace:default,Attempt:1,}"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.073 [INFO][2742] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.073 [INFO][2742] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" iface="eth0" netns="/var/run/netns/cni-d5e0c4db-e0be-07d3-e6c7-3fa078f92530"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.074 [INFO][2742] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" iface="eth0" netns="/var/run/netns/cni-d5e0c4db-e0be-07d3-e6c7-3fa078f92530"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.074 [INFO][2742] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" iface="eth0" netns="/var/run/netns/cni-d5e0c4db-e0be-07d3-e6c7-3fa078f92530"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.074 [INFO][2742] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.074 [INFO][2742] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.167 [INFO][2754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.168 [INFO][2754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.187 [INFO][2754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.211 [WARNING][2754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.215 [INFO][2754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0"
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.218 [INFO][2754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:57:34.225223 containerd[1474]: 2025-01-30 13:57:34.223 [INFO][2742] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd"
Jan 30 13:57:34.228268 containerd[1474]: time="2025-01-30T13:57:34.227948000Z" level=info msg="TearDown network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\" successfully"
Jan 30 13:57:34.228268 containerd[1474]: time="2025-01-30T13:57:34.228004819Z" level=info msg="StopPodSandbox for \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\" returns successfully"
Jan 30 13:57:34.229262 containerd[1474]: time="2025-01-30T13:57:34.228889210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gz6sd,Uid:6a0e4b17-d4ac-44a2-88ca-fc8569ad472d,Namespace:calico-system,Attempt:1,}"
Jan 30 13:57:34.229950 systemd[1]: run-netns-cni\x2dd5e0c4db\x2de0be\x2d07d3\x2de6c7\x2d3fa078f92530.mount: Deactivated successfully.
Jan 30 13:57:34.539595 systemd-networkd[1377]: calic98034dc062: Link UP
Jan 30 13:57:34.541459 systemd-networkd[1377]: calic98034dc062: Gained carrier
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.348 [INFO][2768] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0 nginx-deployment-85f456d6dd- default 8a5435f6-5bd8-40ba-84b3-23f344925544 1172 0 2025-01-30 13:57:21 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 209.38.134.12 nginx-deployment-85f456d6dd-xf9gl eth0 default [] [] [kns.default ksa.default.default] calic98034dc062 [] []}} ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Namespace="default" Pod="nginx-deployment-85f456d6dd-xf9gl" WorkloadEndpoint="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.348 [INFO][2768] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Namespace="default" Pod="nginx-deployment-85f456d6dd-xf9gl" WorkloadEndpoint="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.420 [INFO][2792] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" HandleID="k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.442 [INFO][2792] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" HandleID="k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fc450), Attrs:map[string]string{"namespace":"default", "node":"209.38.134.12", "pod":"nginx-deployment-85f456d6dd-xf9gl", "timestamp":"2025-01-30 13:57:34.420342283 +0000 UTC"}, Hostname:"209.38.134.12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.442 [INFO][2792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.442 [INFO][2792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.442 [INFO][2792] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '209.38.134.12'
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.449 [INFO][2792] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.474 [INFO][2792] ipam/ipam.go 372: Looking up existing affinities for host host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.484 [INFO][2792] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.488 [INFO][2792] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.492 [INFO][2792] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.492 [INFO][2792] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.495 [INFO][2792] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.504 [INFO][2792] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.513 [INFO][2792] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.1/26] block=192.168.26.0/26 handle="k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.513 [INFO][2792] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.1/26] handle="k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" host="209.38.134.12"
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.513 [INFO][2792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:57:34.568606 containerd[1474]: 2025-01-30 13:57:34.513 [INFO][2792] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.1/26] IPv6=[] ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" HandleID="k8s-pod-network.8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.569477 containerd[1474]: 2025-01-30 13:57:34.521 [INFO][2768] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Namespace="default" Pod="nginx-deployment-85f456d6dd-xf9gl" WorkloadEndpoint="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8a5435f6-5bd8-40ba-84b3-23f344925544", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-xf9gl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic98034dc062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:57:34.569477 containerd[1474]: 2025-01-30 13:57:34.521 [INFO][2768] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.1/32] ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Namespace="default" Pod="nginx-deployment-85f456d6dd-xf9gl" WorkloadEndpoint="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.569477 containerd[1474]: 2025-01-30 13:57:34.522 [INFO][2768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic98034dc062 ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Namespace="default" Pod="nginx-deployment-85f456d6dd-xf9gl" WorkloadEndpoint="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.569477 containerd[1474]: 2025-01-30 13:57:34.547 [INFO][2768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Namespace="default" Pod="nginx-deployment-85f456d6dd-xf9gl" WorkloadEndpoint="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.569477 containerd[1474]: 2025-01-30 13:57:34.552 [INFO][2768] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Namespace="default" Pod="nginx-deployment-85f456d6dd-xf9gl" WorkloadEndpoint="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8a5435f6-5bd8-40ba-84b3-23f344925544", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd", Pod:"nginx-deployment-85f456d6dd-xf9gl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic98034dc062", MAC:"ba:3a:0b:9a:a4:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:57:34.569477 containerd[1474]: 2025-01-30 13:57:34.563 [INFO][2768] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd" Namespace="default" Pod="nginx-deployment-85f456d6dd-xf9gl" WorkloadEndpoint="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0"
Jan 30 13:57:34.615791 systemd-networkd[1377]: cali015f678b8a8: Link UP
Jan 30 13:57:34.616290 systemd-networkd[1377]: cali015f678b8a8: Gained carrier
Jan 30 13:57:34.625813 containerd[1474]: time="2025-01-30T13:57:34.624340916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:57:34.625813 containerd[1474]: time="2025-01-30T13:57:34.624896890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:57:34.625813 containerd[1474]: time="2025-01-30T13:57:34.624965615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:57:34.625813 containerd[1474]: time="2025-01-30T13:57:34.625099205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.373 [INFO][2778] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {209.38.134.12-k8s-csi--node--driver--gz6sd-eth0 csi-node-driver- calico-system 6a0e4b17-d4ac-44a2-88ca-fc8569ad472d 1173 0 2025-01-30 13:57:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 209.38.134.12 csi-node-driver-gz6sd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali015f678b8a8 [] []}} ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Namespace="calico-system" Pod="csi-node-driver-gz6sd" WorkloadEndpoint="209.38.134.12-k8s-csi--node--driver--gz6sd-"
Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.374 [INFO][2778] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Namespace="calico-system" Pod="csi-node-driver-gz6sd" WorkloadEndpoint="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0"
Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.440 [INFO][2797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" HandleID="k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0"
Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.471 [INFO][2797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" HandleID="k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907f0), Attrs:map[string]string{"namespace":"calico-system", "node":"209.38.134.12", "pod":"csi-node-driver-gz6sd", "timestamp":"2025-01-30 13:57:34.440244314 +0000 UTC"}, Hostname:"209.38.134.12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.471 [INFO][2797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.513 [INFO][2797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.514 [INFO][2797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '209.38.134.12' Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.519 [INFO][2797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.533 [INFO][2797] ipam/ipam.go 372: Looking up existing affinities for host host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.560 [INFO][2797] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.568 [INFO][2797] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.579 [INFO][2797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.580 [INFO][2797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.583 [INFO][2797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.592 [INFO][2797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.601 [INFO][2797] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.2/26] block=192.168.26.0/26 
handle="k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.601 [INFO][2797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.2/26] handle="k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" host="209.38.134.12" Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.602 [INFO][2797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:34.643863 containerd[1474]: 2025-01-30 13:57:34.602 [INFO][2797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.2/26] IPv6=[] ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" HandleID="k8s-pod-network.f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:57:34.644899 containerd[1474]: 2025-01-30 13:57:34.605 [INFO][2778] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Namespace="calico-system" Pod="csi-node-driver-gz6sd" WorkloadEndpoint="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-csi--node--driver--gz6sd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"", Pod:"csi-node-driver-gz6sd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali015f678b8a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:34.644899 containerd[1474]: 2025-01-30 13:57:34.606 [INFO][2778] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.2/32] ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Namespace="calico-system" Pod="csi-node-driver-gz6sd" WorkloadEndpoint="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:57:34.644899 containerd[1474]: 2025-01-30 13:57:34.607 [INFO][2778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali015f678b8a8 ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Namespace="calico-system" Pod="csi-node-driver-gz6sd" WorkloadEndpoint="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:57:34.644899 containerd[1474]: 2025-01-30 13:57:34.615 [INFO][2778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Namespace="calico-system" Pod="csi-node-driver-gz6sd" WorkloadEndpoint="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:57:34.644899 containerd[1474]: 2025-01-30 13:57:34.616 [INFO][2778] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Namespace="calico-system" 
Pod="csi-node-driver-gz6sd" WorkloadEndpoint="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-csi--node--driver--gz6sd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a", Pod:"csi-node-driver-gz6sd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali015f678b8a8", MAC:"be:40:e5:7d:a8:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:34.644899 containerd[1474]: 2025-01-30 13:57:34.627 [INFO][2778] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a" Namespace="calico-system" Pod="csi-node-driver-gz6sd" WorkloadEndpoint="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:57:34.689175 
systemd[1]: Started cri-containerd-8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd.scope - libcontainer container 8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd. Jan 30 13:57:34.726962 containerd[1474]: time="2025-01-30T13:57:34.726713064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:34.728521 containerd[1474]: time="2025-01-30T13:57:34.727240127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:34.728521 containerd[1474]: time="2025-01-30T13:57:34.727403252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:34.728521 containerd[1474]: time="2025-01-30T13:57:34.727761942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:34.747091 kubelet[1794]: E0130 13:57:34.747048 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:34.764447 systemd[1]: Started cri-containerd-f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a.scope - libcontainer container f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a. 
Jan 30 13:57:34.800088 containerd[1474]: time="2025-01-30T13:57:34.799704335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xf9gl,Uid:8a5435f6-5bd8-40ba-84b3-23f344925544,Namespace:default,Attempt:1,} returns sandbox id \"8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd\"" Jan 30 13:57:34.810598 containerd[1474]: time="2025-01-30T13:57:34.809960996Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:57:34.830701 containerd[1474]: time="2025-01-30T13:57:34.830585634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gz6sd,Uid:6a0e4b17-d4ac-44a2-88ca-fc8569ad472d,Namespace:calico-system,Attempt:1,} returns sandbox id \"f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a\"" Jan 30 13:57:35.748253 kubelet[1794]: E0130 13:57:35.748183 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:35.759254 systemd-networkd[1377]: cali015f678b8a8: Gained IPv6LL Jan 30 13:57:35.949293 systemd-networkd[1377]: calic98034dc062: Gained IPv6LL Jan 30 13:57:36.748421 kubelet[1794]: E0130 13:57:36.748345 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:37.749018 kubelet[1794]: E0130 13:57:37.748917 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:38.135443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276086285.mount: Deactivated successfully. 
Jan 30 13:57:38.749889 kubelet[1794]: E0130 13:57:38.749793 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:39.752925 kubelet[1794]: E0130 13:57:39.751030 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:40.315893 containerd[1474]: time="2025-01-30T13:57:40.315541917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:40.317536 containerd[1474]: time="2025-01-30T13:57:40.317353205Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:57:40.320522 containerd[1474]: time="2025-01-30T13:57:40.319356113Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:40.324523 containerd[1474]: time="2025-01-30T13:57:40.324445411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:40.330724 containerd[1474]: time="2025-01-30T13:57:40.330305345Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.520290699s" Jan 30 13:57:40.330724 containerd[1474]: time="2025-01-30T13:57:40.330373286Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:57:40.333773 containerd[1474]: 
time="2025-01-30T13:57:40.333723388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:57:40.336870 containerd[1474]: time="2025-01-30T13:57:40.336499640Z" level=info msg="CreateContainer within sandbox \"8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:57:40.369892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733210414.mount: Deactivated successfully. Jan 30 13:57:40.377087 containerd[1474]: time="2025-01-30T13:57:40.375801280Z" level=info msg="CreateContainer within sandbox \"8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"91b5a7847730e310bdef0ac8c3bb0e71a551c287c83c5539691e72bb674c952b\"" Jan 30 13:57:40.378369 containerd[1474]: time="2025-01-30T13:57:40.377775384Z" level=info msg="StartContainer for \"91b5a7847730e310bdef0ac8c3bb0e71a551c287c83c5539691e72bb674c952b\"" Jan 30 13:57:40.520644 systemd[1]: Started cri-containerd-91b5a7847730e310bdef0ac8c3bb0e71a551c287c83c5539691e72bb674c952b.scope - libcontainer container 91b5a7847730e310bdef0ac8c3bb0e71a551c287c83c5539691e72bb674c952b. 
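The containerd entries above all share a logfmt-style layout, `time="..." level=... msg="..."`, with inner quotes escaped as `\"` by the journal. A small parser like the one below can pull those fields out for filtering; the regex and the unescaping step are a convenience sketch inferred from the lines in this log, not an official containerd tool.

```python
import re

# Matches the time/level/msg fields of the containerd entries in this log.
# The msg group tolerates backslash-escaped quotes (\") inside the message.
ENTRY = re.compile(
    r'time="(?P<time>[^"]+)" level=(?P<level>\w+) msg="(?P<msg>(?:[^"\\]|\\.)*)"'
)

def parse_entry(line: str) -> dict:
    """Extract time, level, and msg from one containerd log line."""
    m = ENTRY.search(line)
    if m is None:
        raise ValueError("not a containerd logfmt entry")
    d = m.groupdict()
    d["msg"] = d["msg"].replace('\\"', '"')  # undo the journal's quote escaping
    return d

# A line copied from the log above (container ID shortened not at all):
line = r'containerd[1474]: time="2025-01-30T13:57:40.377775384Z" level=info msg="StartContainer for \"91b5a7847730e310bdef0ac8c3bb0e71a551c287c83c5539691e72bb674c952b\""'
e = parse_entry(line)
print(e["level"], e["msg"])
```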
Jan 30 13:57:40.614339 containerd[1474]: time="2025-01-30T13:57:40.613920402Z" level=info msg="StartContainer for \"91b5a7847730e310bdef0ac8c3bb0e71a551c287c83c5539691e72bb674c952b\" returns successfully" Jan 30 13:57:40.754205 kubelet[1794]: E0130 13:57:40.751407 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:41.752104 kubelet[1794]: E0130 13:57:41.752027 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:42.194277 containerd[1474]: time="2025-01-30T13:57:42.192958825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:42.201558 containerd[1474]: time="2025-01-30T13:57:42.198083171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:57:42.203296 containerd[1474]: time="2025-01-30T13:57:42.203226161Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:42.210712 containerd[1474]: time="2025-01-30T13:57:42.210598878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:42.212547 containerd[1474]: time="2025-01-30T13:57:42.212450242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.878664045s" Jan 30 13:57:42.212713 containerd[1474]: 
time="2025-01-30T13:57:42.212566492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:57:42.224333 containerd[1474]: time="2025-01-30T13:57:42.223085094Z" level=info msg="CreateContainer within sandbox \"f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:57:42.289356 containerd[1474]: time="2025-01-30T13:57:42.289059692Z" level=info msg="CreateContainer within sandbox \"f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a4d9c98f6a6fd03e90a5832e1d2f6630352a6a82f58bb401f891cbe9748a9884\"" Jan 30 13:57:42.292308 containerd[1474]: time="2025-01-30T13:57:42.290871337Z" level=info msg="StartContainer for \"a4d9c98f6a6fd03e90a5832e1d2f6630352a6a82f58bb401f891cbe9748a9884\"" Jan 30 13:57:42.418585 systemd[1]: Started cri-containerd-a4d9c98f6a6fd03e90a5832e1d2f6630352a6a82f58bb401f891cbe9748a9884.scope - libcontainer container a4d9c98f6a6fd03e90a5832e1d2f6630352a6a82f58bb401f891cbe9748a9884. 
Jan 30 13:57:42.563460 containerd[1474]: time="2025-01-30T13:57:42.558455727Z" level=info msg="StartContainer for \"a4d9c98f6a6fd03e90a5832e1d2f6630352a6a82f58bb401f891cbe9748a9884\" returns successfully" Jan 30 13:57:42.571655 containerd[1474]: time="2025-01-30T13:57:42.568931651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:57:42.752682 kubelet[1794]: E0130 13:57:42.752567 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:43.712029 kubelet[1794]: E0130 13:57:43.711935 1794 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:43.753937 kubelet[1794]: E0130 13:57:43.753755 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:44.434495 containerd[1474]: time="2025-01-30T13:57:44.434177850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:44.437139 containerd[1474]: time="2025-01-30T13:57:44.436720295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:57:44.440984 containerd[1474]: time="2025-01-30T13:57:44.440510366Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:44.445085 containerd[1474]: time="2025-01-30T13:57:44.445013158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:44.446739 containerd[1474]: time="2025-01-30T13:57:44.446453387Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.877443145s" Jan 30 13:57:44.446739 containerd[1474]: time="2025-01-30T13:57:44.446520345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:57:44.452240 containerd[1474]: time="2025-01-30T13:57:44.452032516Z" level=info msg="CreateContainer within sandbox \"f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:57:44.484204 containerd[1474]: time="2025-01-30T13:57:44.483557715Z" level=info msg="CreateContainer within sandbox \"f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ed5dbb849a2d2a778cfc1f971681848d4d1ab9ca62124164d9ca478740d517a3\"" Jan 30 13:57:44.491671 containerd[1474]: time="2025-01-30T13:57:44.491593264Z" level=info msg="StartContainer for \"ed5dbb849a2d2a778cfc1f971681848d4d1ab9ca62124164d9ca478740d517a3\"" Jan 30 13:57:44.586424 systemd[1]: Started cri-containerd-ed5dbb849a2d2a778cfc1f971681848d4d1ab9ca62124164d9ca478740d517a3.scope - libcontainer container ed5dbb849a2d2a778cfc1f971681848d4d1ab9ca62124164d9ca478740d517a3. 
Jan 30 13:57:44.673679 containerd[1474]: time="2025-01-30T13:57:44.673381069Z" level=info msg="StartContainer for \"ed5dbb849a2d2a778cfc1f971681848d4d1ab9ca62124164d9ca478740d517a3\" returns successfully" Jan 30 13:57:44.755383 kubelet[1794]: E0130 13:57:44.755129 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:44.976394 kubelet[1794]: I0130 13:57:44.975593 1794 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:57:44.976394 kubelet[1794]: I0130 13:57:44.975641 1794 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:57:45.283568 kubelet[1794]: I0130 13:57:45.282560 1794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gz6sd" podStartSLOduration=32.668548286000004 podStartE2EDuration="42.282533661s" podCreationTimestamp="2025-01-30 13:57:03 +0000 UTC" firstStartedPulling="2025-01-30 13:57:34.834459568 +0000 UTC m=+32.138448051" lastFinishedPulling="2025-01-30 13:57:44.448444935 +0000 UTC m=+41.752433426" observedRunningTime="2025-01-30 13:57:45.280874422 +0000 UTC m=+42.584862930" watchObservedRunningTime="2025-01-30 13:57:45.282533661 +0000 UTC m=+42.586522159" Jan 30 13:57:45.283568 kubelet[1794]: I0130 13:57:45.283313 1794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-xf9gl" podStartSLOduration=18.758010077 podStartE2EDuration="24.283295246s" podCreationTimestamp="2025-01-30 13:57:21 +0000 UTC" firstStartedPulling="2025-01-30 13:57:34.807005451 +0000 UTC m=+32.110993941" lastFinishedPulling="2025-01-30 13:57:40.332290617 +0000 UTC m=+37.636279110" observedRunningTime="2025-01-30 13:57:41.246164076 +0000 UTC m=+38.550152588" 
watchObservedRunningTime="2025-01-30 13:57:45.283295246 +0000 UTC m=+42.587283752" Jan 30 13:57:45.756152 kubelet[1794]: E0130 13:57:45.755952 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:46.173937 kubelet[1794]: I0130 13:57:46.173251 1794 topology_manager.go:215] "Topology Admit Handler" podUID="3fa36027-391f-4b11-9558-3780ac02f388" podNamespace="default" podName="nfs-server-provisioner-0" Jan 30 13:57:46.186193 systemd[1]: Created slice kubepods-besteffort-pod3fa36027_391f_4b11_9558_3780ac02f388.slice - libcontainer container kubepods-besteffort-pod3fa36027_391f_4b11_9558_3780ac02f388.slice. Jan 30 13:57:46.273871 kubelet[1794]: I0130 13:57:46.273770 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrwws\" (UniqueName: \"kubernetes.io/projected/3fa36027-391f-4b11-9558-3780ac02f388-kube-api-access-qrwws\") pod \"nfs-server-provisioner-0\" (UID: \"3fa36027-391f-4b11-9558-3780ac02f388\") " pod="default/nfs-server-provisioner-0" Jan 30 13:57:46.273871 kubelet[1794]: I0130 13:57:46.273871 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3fa36027-391f-4b11-9558-3780ac02f388-data\") pod \"nfs-server-provisioner-0\" (UID: \"3fa36027-391f-4b11-9558-3780ac02f388\") " pod="default/nfs-server-provisioner-0" Jan 30 13:57:46.492444 containerd[1474]: time="2025-01-30T13:57:46.492050338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3fa36027-391f-4b11-9558-3780ac02f388,Namespace:default,Attempt:0,}" Jan 30 13:57:46.762417 kubelet[1794]: E0130 13:57:46.757003 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:46.793650 systemd-networkd[1377]: cali60e51b789ff: Link UP Jan 30 13:57:46.794037 
systemd-networkd[1377]: cali60e51b789ff: Gained carrier Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.592 [INFO][3098] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {209.38.134.12-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 3fa36027-391f-4b11-9558-3780ac02f388 1244 0 2025-01-30 13:57:46 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 209.38.134.12 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.134.12-k8s-nfs--server--provisioner--0-" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.593 [INFO][3098] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.676 [INFO][3108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" 
HandleID="k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Workload="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.699 [INFO][3108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" HandleID="k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Workload="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003197b0), Attrs:map[string]string{"namespace":"default", "node":"209.38.134.12", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-30 13:57:46.676339021 +0000 UTC"}, Hostname:"209.38.134.12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.700 [INFO][3108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.700 [INFO][3108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
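The kubelet pod_startup_latency_tracker entries earlier (Jan 30 13:57:45) report two durations per pod. For csi-node-driver-gz6sd, the numbers are consistent with podStartSLOduration being the end-to-end startup time minus the image-pull window, computed from the monotonic `m=+` offsets. The formula below is inferred from those numbers rather than quoted from kubelet source; the field values are taken straight from the log entry.

```python
# csi-node-driver-gz6sd values from the pod_startup_latency_tracker entry:
e2e = 42.282533661        # podStartE2EDuration: watchObservedRunningTime - podCreationTimestamp
first_pull = 32.138448051 # firstStartedPulling, monotonic m=+ offset
last_pull = 41.752433426  # lastFinishedPulling, monotonic m=+ offset

# SLO duration excludes the image-pull window from the end-to-end time.
slo = e2e - (last_pull - first_pull)
print(f"{slo:.9f}")  # matches podStartSLOduration=32.668548286 in the log
```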
Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.700 [INFO][3108] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '209.38.134.12' Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.707 [INFO][3108] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.726 [INFO][3108] ipam/ipam.go 372: Looking up existing affinities for host host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.736 [INFO][3108] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.744 [INFO][3108] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.749 [INFO][3108] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.749 [INFO][3108] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.754 [INFO][3108] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.763 [INFO][3108] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.778 [INFO][3108] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.3/26] block=192.168.26.0/26 
handle="k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.778 [INFO][3108] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.3/26] handle="k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" host="209.38.134.12" Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.778 [INFO][3108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:46.820083 containerd[1474]: 2025-01-30 13:57:46.779 [INFO][3108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.3/26] IPv6=[] ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" HandleID="k8s-pod-network.29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Workload="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:57:46.820872 containerd[1474]: 2025-01-30 13:57:46.782 [INFO][3098] cni-plugin/k8s.go 386: Populated endpoint ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"3fa36027-391f-4b11-9558-3780ac02f388", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:46.820872 containerd[1474]: 2025-01-30 13:57:46.783 [INFO][3098] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.3/32] ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:57:46.820872 containerd[1474]: 2025-01-30 13:57:46.783 [INFO][3098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:57:46.820872 containerd[1474]: 2025-01-30 13:57:46.792 [INFO][3098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:57:46.821051 containerd[1474]: 2025-01-30 13:57:46.795 [INFO][3098] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"3fa36027-391f-4b11-9558-3780ac02f388", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0a:2a:a5:ae:b8:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:46.821051 containerd[1474]: 2025-01-30 13:57:46.814 [INFO][3098] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.134.12-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:57:46.861661 containerd[1474]: time="2025-01-30T13:57:46.861216715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:46.861661 containerd[1474]: time="2025-01-30T13:57:46.861292447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:46.861661 containerd[1474]: time="2025-01-30T13:57:46.861310116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:46.861661 containerd[1474]: time="2025-01-30T13:57:46.861429207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:46.919485 systemd[1]: run-containerd-runc-k8s.io-29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea-runc.qs74Eg.mount: Deactivated successfully. Jan 30 13:57:46.929873 systemd[1]: Started cri-containerd-29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea.scope - libcontainer container 29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea. 
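The Calico `WorkloadEndpoint` dumps above print the NFS provisioner's container ports as Go hex literals (`Port:0x801`, `0x8023`, and so on). As an annotation of this log, not output of any component shown, the hex values can be decoded to confirm they are the conventional NFS service ports:

```python
# Port values copied from the v3.WorkloadEndpointPort entries in the
# log above; decoding the Go hex literals recovers the usual NFS ports.
HEX_PORTS = {
    "nfs": 0x801,        # nfsd
    "nlockmgr": 0x8023,  # lockd
    "mountd": 0x4e50,    # mountd
    "rquotad": 0x36b,    # rquotad
    "rpcbind": 0x6f,     # rpcbind/portmapper
    "statd": 0x296,      # statd
}

decoded = {name: int(port) for name, port in HEX_PORTS.items()}
print(decoded)
```

`0x801` decodes to 2049 (the standard NFS port) and `0x6f` to 111 (rpcbind), matching what the chart's service definition would be expected to expose.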
Jan 30 13:57:47.011958 containerd[1474]: time="2025-01-30T13:57:47.011652937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3fa36027-391f-4b11-9558-3780ac02f388,Namespace:default,Attempt:0,} returns sandbox id \"29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea\"" Jan 30 13:57:47.015907 containerd[1474]: time="2025-01-30T13:57:47.015631709Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:57:47.762223 kubelet[1794]: E0130 13:57:47.761548 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:48.557093 systemd-networkd[1377]: cali60e51b789ff: Gained IPv6LL Jan 30 13:57:48.762629 kubelet[1794]: E0130 13:57:48.762569 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:49.763536 kubelet[1794]: E0130 13:57:49.763444 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:50.338626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount665113386.mount: Deactivated successfully. 
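The `ipam/ipam.go` entries earlier show the host confirming affinity for the block `192.168.26.0/26` and assigning `192.168.26.3` from it. A minimal sketch of the block arithmetic behind those lines (membership and block size only; this does not reproduce Calico's datastore logic):

```python
import ipaddress

# Values copied from the ipam/ipam.go log entries above.
block = ipaddress.ip_network("192.168.26.0/26")
assigned = ipaddress.ip_address("192.168.26.3")

print(block.num_addresses)  # a /26 affinity block holds 64 addresses
print(assigned in block)    # the assigned IP falls inside the block
```

This is consistent with the other endpoints in the log (`192.168.26.1` for the nginx deployment, `192.168.26.2` for csi-node-driver) all coming from the same per-host /26 block.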
Jan 30 13:57:50.764952 kubelet[1794]: E0130 13:57:50.764463 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:51.765494 kubelet[1794]: E0130 13:57:51.765437 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:51.833900 kubelet[1794]: E0130 13:57:51.832198 1794 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:57:52.766926 kubelet[1794]: E0130 13:57:52.766483 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:53.324338 containerd[1474]: time="2025-01-30T13:57:53.324257672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:53.328696 containerd[1474]: time="2025-01-30T13:57:53.328603314Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 13:57:53.330698 containerd[1474]: time="2025-01-30T13:57:53.330480940Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:53.336860 containerd[1474]: time="2025-01-30T13:57:53.335974792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:53.337109 containerd[1474]: time="2025-01-30T13:57:53.337071322Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.321374887s" Jan 30 13:57:53.337179 containerd[1474]: time="2025-01-30T13:57:53.337163442Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:57:53.391703 containerd[1474]: time="2025-01-30T13:57:53.391625798Z" level=info msg="CreateContainer within sandbox \"29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:57:53.454215 containerd[1474]: time="2025-01-30T13:57:53.454097957Z" level=info msg="CreateContainer within sandbox \"29852ddbbeaa19383d1dfcdd87bb7cc074c3e0c1056c8fe122b5f7959d05eaea\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c921d0f30bfec0f9cefdd7efbcdeba0be008a9c2162f6544cb9a08eb25f8d9ac\"" Jan 30 13:57:53.455044 containerd[1474]: time="2025-01-30T13:57:53.454922561Z" level=info msg="StartContainer for \"c921d0f30bfec0f9cefdd7efbcdeba0be008a9c2162f6544cb9a08eb25f8d9ac\"" Jan 30 13:57:53.502115 systemd[1]: run-containerd-runc-k8s.io-c921d0f30bfec0f9cefdd7efbcdeba0be008a9c2162f6544cb9a08eb25f8d9ac-runc.g8lJ7a.mount: Deactivated successfully. Jan 30 13:57:53.519693 systemd[1]: Started cri-containerd-c921d0f30bfec0f9cefdd7efbcdeba0be008a9c2162f6544cb9a08eb25f8d9ac.scope - libcontainer container c921d0f30bfec0f9cefdd7efbcdeba0be008a9c2162f6544cb9a08eb25f8d9ac. 
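The containerd entry above reports the nfs-provisioner image pull completing "in 6.321374887s". A hypothetical one-liner for extracting that duration from such a message when scanning these logs (the sample line below is abridged from the log; the regex is an assumption about the message shape, not a containerd API):

```python
import re

# Abridged copy of the containerd "Pulled image" message from the log.
line = ('msg="Pulled image \\"registry.k8s.io/sig-storage/'
        'nfs-provisioner:v4.0.8\\" with image id ... in 6.321374887s"')

match = re.search(r'in (\d+(?:\.\d+)?)s', line)
duration = float(match.group(1)) if match else None
print(duration)
```

Note containerd prints Go `time.Duration` values, so longer pulls would render as e.g. `1m6.3s` and need a richer pattern than this sketch handles.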
Jan 30 13:57:53.573717 containerd[1474]: time="2025-01-30T13:57:53.573654753Z" level=info msg="StartContainer for \"c921d0f30bfec0f9cefdd7efbcdeba0be008a9c2162f6544cb9a08eb25f8d9ac\" returns successfully" Jan 30 13:57:53.774265 kubelet[1794]: E0130 13:57:53.774191 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:54.383428 kubelet[1794]: I0130 13:57:54.383285 1794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.035980065 podStartE2EDuration="8.381644711s" podCreationTimestamp="2025-01-30 13:57:46 +0000 UTC" firstStartedPulling="2025-01-30 13:57:47.015063224 +0000 UTC m=+44.319051697" lastFinishedPulling="2025-01-30 13:57:53.36072787 +0000 UTC m=+50.664716343" observedRunningTime="2025-01-30 13:57:54.381288992 +0000 UTC m=+51.685277499" watchObservedRunningTime="2025-01-30 13:57:54.381644711 +0000 UTC m=+51.685633214" Jan 30 13:57:54.774726 kubelet[1794]: E0130 13:57:54.774655 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:55.775399 kubelet[1794]: E0130 13:57:55.775312 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:56.776501 kubelet[1794]: E0130 13:57:56.776416 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:57.777342 kubelet[1794]: E0130 13:57:57.777259 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:58.778734 kubelet[1794]: E0130 13:57:58.778364 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:57:59.779062 kubelet[1794]: E0130 13:57:59.778877 1794 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:00.780119 kubelet[1794]: E0130 13:58:00.780024 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:01.780425 kubelet[1794]: E0130 13:58:01.780333 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:02.782774 kubelet[1794]: E0130 13:58:02.782158 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:03.404804 kubelet[1794]: I0130 13:58:03.404674 1794 topology_manager.go:215] "Topology Admit Handler" podUID="1514d530-24e5-4a8f-b7ef-12102730071e" podNamespace="default" podName="test-pod-1" Jan 30 13:58:03.425434 systemd[1]: Created slice kubepods-besteffort-pod1514d530_24e5_4a8f_b7ef_12102730071e.slice - libcontainer container kubepods-besteffort-pod1514d530_24e5_4a8f_b7ef_12102730071e.slice. 
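The `pod_startup_latency_tracker` entry above reports `podStartE2EDuration="8.381644711s"` for nfs-server-provisioner-0. That figure can be cross-checked against the timestamps in the same entry: it is `watchObservedRunningTime` minus `podCreationTimestamp` (timestamps copied from the log, with the nanosecond fraction rounded to microseconds for `datetime`):

```python
from datetime import datetime, timezone

# podCreationTimestamp="2025-01-30 13:57:46 +0000 UTC"
created = datetime(2025, 1, 30, 13, 57, 46, tzinfo=timezone.utc)
# watchObservedRunningTime="... 13:57:54.381644711" (rounded to µs)
running = datetime(2025, 1, 30, 13, 57, 54, 381645, tzinfo=timezone.utc)

e2e = (running - created).total_seconds()
print(e2e)  # ~8.381645 s, matching the reported podStartE2EDuration
```

The smaller `podStartSLOduration` of ~2.04s is the E2E duration with the ~6.3s image pull window excluded, which lines up with the pull timing logged at 13:57:53.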
Jan 30 13:58:03.546980 kubelet[1794]: I0130 13:58:03.544191 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjrv6\" (UniqueName: \"kubernetes.io/projected/1514d530-24e5-4a8f-b7ef-12102730071e-kube-api-access-tjrv6\") pod \"test-pod-1\" (UID: \"1514d530-24e5-4a8f-b7ef-12102730071e\") " pod="default/test-pod-1" Jan 30 13:58:03.546980 kubelet[1794]: I0130 13:58:03.544332 1794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-475346a6-608a-430e-9b31-ed4308b68f89\" (UniqueName: \"kubernetes.io/nfs/1514d530-24e5-4a8f-b7ef-12102730071e-pvc-475346a6-608a-430e-9b31-ed4308b68f89\") pod \"test-pod-1\" (UID: \"1514d530-24e5-4a8f-b7ef-12102730071e\") " pod="default/test-pod-1" Jan 30 13:58:03.710469 kubelet[1794]: E0130 13:58:03.710315 1794 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:03.724337 kernel: FS-Cache: Loaded Jan 30 13:58:03.784422 kubelet[1794]: E0130 13:58:03.783664 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:03.796607 containerd[1474]: time="2025-01-30T13:58:03.789719204Z" level=info msg="StopPodSandbox for \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\"" Jan 30 13:58:03.885205 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:58:03.885372 kernel: RPC: Registered udp transport module. Jan 30 13:58:03.885420 kernel: RPC: Registered tcp transport module. Jan 30 13:58:03.885446 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:58:03.885476 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:03.935 [WARNING][3318] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-csi--node--driver--gz6sd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a", Pod:"csi-node-driver-gz6sd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali015f678b8a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:03.935 [INFO][3318] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:03.935 [INFO][3318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" iface="eth0" netns="" Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:03.935 [INFO][3318] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:03.936 [INFO][3318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:04.032 [INFO][3326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:04.032 [INFO][3326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:04.033 [INFO][3326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:04.044 [WARNING][3326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:04.044 [INFO][3326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:04.050 [INFO][3326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:04.062905 containerd[1474]: 2025-01-30 13:58:04.054 [INFO][3318] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:58:04.062905 containerd[1474]: time="2025-01-30T13:58:04.059022028Z" level=info msg="TearDown network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\" successfully" Jan 30 13:58:04.062905 containerd[1474]: time="2025-01-30T13:58:04.059063374Z" level=info msg="StopPodSandbox for \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\" returns successfully" Jan 30 13:58:04.069879 containerd[1474]: time="2025-01-30T13:58:04.068575982Z" level=info msg="RemovePodSandbox for \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\"" Jan 30 13:58:04.069879 containerd[1474]: time="2025-01-30T13:58:04.068666064Z" level=info msg="Forcibly stopping sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\"" Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.316 [WARNING][3350] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-csi--node--driver--gz6sd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a0e4b17-d4ac-44a2-88ca-fc8569ad472d", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"f920131af5376e81f2246a429367fbdd7dd56fb3de46b059e80744707b90cb7a", Pod:"csi-node-driver-gz6sd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali015f678b8a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.317 [INFO][3350] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.317 [INFO][3350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" iface="eth0" netns="" Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.317 [INFO][3350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.317 [INFO][3350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.376 [INFO][3356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.377 [INFO][3356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.377 [INFO][3356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.391 [WARNING][3356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.391 [INFO][3356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" HandleID="k8s-pod-network.f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Workload="209.38.134.12-k8s-csi--node--driver--gz6sd-eth0" Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.396 [INFO][3356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:04.406323 containerd[1474]: 2025-01-30 13:58:04.401 [INFO][3350] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd" Jan 30 13:58:04.406323 containerd[1474]: time="2025-01-30T13:58:04.403726201Z" level=info msg="TearDown network for sandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\" successfully" Jan 30 13:58:04.438437 containerd[1474]: time="2025-01-30T13:58:04.437689509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:58:04.438437 containerd[1474]: time="2025-01-30T13:58:04.437859184Z" level=info msg="RemovePodSandbox \"f09127e2e06684a44a2c53908a2bba3756904a74d4e0fe6114ccf89049fea0cd\" returns successfully" Jan 30 13:58:04.439948 containerd[1474]: time="2025-01-30T13:58:04.439444658Z" level=info msg="StopPodSandbox for \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\"" Jan 30 13:58:04.447892 kernel: NFS: Registering the id_resolver key type Jan 30 13:58:04.452105 kernel: Key type id_resolver registered Jan 30 13:58:04.455947 kernel: Key type id_legacy registered Jan 30 13:58:04.563943 nfsidmap[3387]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-a-7ea7bfb23e' Jan 30 13:58:04.586309 nfsidmap[3388]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-a-7ea7bfb23e' Jan 30 13:58:04.641689 containerd[1474]: time="2025-01-30T13:58:04.641046181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1514d530-24e5-4a8f-b7ef-12102730071e,Namespace:default,Attempt:0,}" Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.566 [WARNING][3375] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8a5435f6-5bd8-40ba-84b3-23f344925544", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd", Pod:"nginx-deployment-85f456d6dd-xf9gl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic98034dc062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.567 [INFO][3375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.567 [INFO][3375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" iface="eth0" netns="" Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.567 [INFO][3375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.567 [INFO][3375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.644 [INFO][3389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.645 [INFO][3389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.645 [INFO][3389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.670 [WARNING][3389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.670 [INFO][3389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.677 [INFO][3389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:04.682452 containerd[1474]: 2025-01-30 13:58:04.680 [INFO][3375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Jan 30 13:58:04.682452 containerd[1474]: time="2025-01-30T13:58:04.682100960Z" level=info msg="TearDown network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\" successfully" Jan 30 13:58:04.682452 containerd[1474]: time="2025-01-30T13:58:04.682139133Z" level=info msg="StopPodSandbox for \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\" returns successfully" Jan 30 13:58:04.684557 containerd[1474]: time="2025-01-30T13:58:04.682809850Z" level=info msg="RemovePodSandbox for \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\"" Jan 30 13:58:04.684557 containerd[1474]: time="2025-01-30T13:58:04.682955277Z" level=info msg="Forcibly stopping sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\"" Jan 30 13:58:04.785228 kubelet[1794]: E0130 13:58:04.785176 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 
13:58:04.908 [WARNING][3409] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8a5435f6-5bd8-40ba-84b3-23f344925544", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"8db2b6627597b17590b4002cede6532695f040005c904f009365b6f52ac1c3dd", Pod:"nginx-deployment-85f456d6dd-xf9gl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic98034dc062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:04.908 [INFO][3409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:04.908 [INFO][3409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" iface="eth0" netns="" Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:04.908 [INFO][3409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:04.908 [INFO][3409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:05.018 [INFO][3427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:05.018 [INFO][3427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:05.018 [INFO][3427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:05.047 [WARNING][3427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:05.047 [INFO][3427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" HandleID="k8s-pod-network.3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Workload="209.38.134.12-k8s-nginx--deployment--85f456d6dd--xf9gl-eth0" Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:05.051 [INFO][3427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:05.060277 containerd[1474]: 2025-01-30 13:58:05.056 [INFO][3409] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35" Jan 30 13:58:05.060277 containerd[1474]: time="2025-01-30T13:58:05.060153767Z" level=info msg="TearDown network for sandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\" successfully" Jan 30 13:58:05.066691 containerd[1474]: time="2025-01-30T13:58:05.065658732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:58:05.066691 containerd[1474]: time="2025-01-30T13:58:05.065755979Z" level=info msg="RemovePodSandbox \"3b8e1be27c7eca96672bb033249acf00efded4d3123e04a578c3154c059caa35\" returns successfully" Jan 30 13:58:05.130492 systemd-networkd[1377]: cali5ec59c6bf6e: Link UP Jan 30 13:58:05.131738 systemd-networkd[1377]: cali5ec59c6bf6e: Gained carrier Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:04.873 [INFO][3413] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {209.38.134.12-k8s-test--pod--1-eth0 default 1514d530-24e5-4a8f-b7ef-12102730071e 1316 0 2025-01-30 13:57:46 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 209.38.134.12 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.134.12-k8s-test--pod--1-" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:04.874 [INFO][3413] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.134.12-k8s-test--pod--1-eth0" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.007 [INFO][3426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" HandleID="k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Workload="209.38.134.12-k8s-test--pod--1-eth0" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.048 [INFO][3426] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" 
HandleID="k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Workload="209.38.134.12-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050240), Attrs:map[string]string{"namespace":"default", "node":"209.38.134.12", "pod":"test-pod-1", "timestamp":"2025-01-30 13:58:05.007122913 +0000 UTC"}, Hostname:"209.38.134.12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.048 [INFO][3426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.051 [INFO][3426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.052 [INFO][3426] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '209.38.134.12' Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.058 [INFO][3426] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.069 [INFO][3426] ipam/ipam.go 372: Looking up existing affinities for host host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.080 [INFO][3426] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.084 [INFO][3426] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.093 [INFO][3426] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.093 
[INFO][3426] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.099 [INFO][3426] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3 Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.107 [INFO][3426] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.118 [INFO][3426] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.4/26] block=192.168.26.0/26 handle="k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.118 [INFO][3426] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.4/26] handle="k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" host="209.38.134.12" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.118 [INFO][3426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.118 [INFO][3426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.4/26] IPv6=[] ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" HandleID="k8s-pod-network.fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Workload="209.38.134.12-k8s-test--pod--1-eth0" Jan 30 13:58:05.175237 containerd[1474]: 2025-01-30 13:58:05.122 [INFO][3413] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.134.12-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1514d530-24e5-4a8f-b7ef-12102730071e", ResourceVersion:"1316", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.134.12", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:05.181611 containerd[1474]: 2025-01-30 13:58:05.123 [INFO][3413] cni-plugin/k8s.go 387: Calico CNI using IPs: 
[192.168.26.4/32] ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.134.12-k8s-test--pod--1-eth0" Jan 30 13:58:05.181611 containerd[1474]: 2025-01-30 13:58:05.123 [INFO][3413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.134.12-k8s-test--pod--1-eth0" Jan 30 13:58:05.181611 containerd[1474]: 2025-01-30 13:58:05.131 [INFO][3413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.134.12-k8s-test--pod--1-eth0" Jan 30 13:58:05.181611 containerd[1474]: 2025-01-30 13:58:05.141 [INFO][3413] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.134.12-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.134.12-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1514d530-24e5-4a8f-b7ef-12102730071e", ResourceVersion:"1316", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"209.38.134.12", ContainerID:"fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"b6:aa:81:ff:04:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:05.181611 containerd[1474]: 2025-01-30 13:58:05.169 [INFO][3413] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.134.12-k8s-test--pod--1-eth0" Jan 30 13:58:05.251225 containerd[1474]: time="2025-01-30T13:58:05.250687011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:58:05.251225 containerd[1474]: time="2025-01-30T13:58:05.250779997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:58:05.251225 containerd[1474]: time="2025-01-30T13:58:05.250913368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:05.253244 containerd[1474]: time="2025-01-30T13:58:05.252864550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:05.319768 systemd[1]: Started cri-containerd-fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3.scope - libcontainer container fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3. 
Jan 30 13:58:05.436558 containerd[1474]: time="2025-01-30T13:58:05.436454210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1514d530-24e5-4a8f-b7ef-12102730071e,Namespace:default,Attempt:0,} returns sandbox id \"fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3\"" Jan 30 13:58:05.444449 containerd[1474]: time="2025-01-30T13:58:05.441798975Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:58:05.787516 kubelet[1794]: E0130 13:58:05.787413 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:05.848228 containerd[1474]: time="2025-01-30T13:58:05.848106961Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:05.853670 containerd[1474]: time="2025-01-30T13:58:05.852137747Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:58:05.858245 containerd[1474]: time="2025-01-30T13:58:05.857876983Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 416.001538ms" Jan 30 13:58:05.858245 containerd[1474]: time="2025-01-30T13:58:05.857984553Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:58:05.865770 containerd[1474]: time="2025-01-30T13:58:05.864539726Z" level=info msg="CreateContainer within sandbox \"fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:58:05.973786 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3896515531.mount: Deactivated successfully. Jan 30 13:58:05.989253 containerd[1474]: time="2025-01-30T13:58:05.988999442Z" level=info msg="CreateContainer within sandbox \"fe27fe1cfe443015074402c0a5cb23aa2c1a71a3581b079328e931decbf1c1c3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ad971046431fc2aaa4940270dac2ad40a031000c65dea3aae88c1162e7170ef1\"" Jan 30 13:58:05.992171 containerd[1474]: time="2025-01-30T13:58:05.990981483Z" level=info msg="StartContainer for \"ad971046431fc2aaa4940270dac2ad40a031000c65dea3aae88c1162e7170ef1\"" Jan 30 13:58:06.096257 systemd[1]: Started cri-containerd-ad971046431fc2aaa4940270dac2ad40a031000c65dea3aae88c1162e7170ef1.scope - libcontainer container ad971046431fc2aaa4940270dac2ad40a031000c65dea3aae88c1162e7170ef1. Jan 30 13:58:06.207307 containerd[1474]: time="2025-01-30T13:58:06.206987729Z" level=info msg="StartContainer for \"ad971046431fc2aaa4940270dac2ad40a031000c65dea3aae88c1162e7170ef1\" returns successfully" Jan 30 13:58:06.438812 kubelet[1794]: I0130 13:58:06.438087 1794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=20.012704066 podStartE2EDuration="20.433593941s" podCreationTimestamp="2025-01-30 13:57:46 +0000 UTC" firstStartedPulling="2025-01-30 13:58:05.439783059 +0000 UTC m=+62.743771540" lastFinishedPulling="2025-01-30 13:58:05.860672918 +0000 UTC m=+63.164661415" observedRunningTime="2025-01-30 13:58:06.431849521 +0000 UTC m=+63.735838038" watchObservedRunningTime="2025-01-30 13:58:06.433593941 +0000 UTC m=+63.737582440" Jan 30 13:58:06.477510 systemd-networkd[1377]: cali5ec59c6bf6e: Gained IPv6LL Jan 30 13:58:06.787971 kubelet[1794]: E0130 13:58:06.787876 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:07.789914 kubelet[1794]: E0130 13:58:07.789698 1794 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:08.790119 kubelet[1794]: E0130 13:58:08.789999 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:58:09.791075 kubelet[1794]: E0130 13:58:09.790966 1794 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"