Jan 16 09:06:31.025752 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 16 09:06:31.026984 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 09:06:31.027040 kernel: BIOS-provided physical RAM map:
Jan 16 09:06:31.027051 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 16 09:06:31.027061 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 16 09:06:31.027070 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 16 09:06:31.027082 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 16 09:06:31.027093 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 16 09:06:31.027103 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 09:06:31.027117 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 16 09:06:31.027128 kernel: NX (Execute Disable) protection: active
Jan 16 09:06:31.027138 kernel: APIC: Static calls initialized
Jan 16 09:06:31.027153 kernel: SMBIOS 2.8 present.
Jan 16 09:06:31.027163 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 16 09:06:31.027175 kernel: Hypervisor detected: KVM
Jan 16 09:06:31.027192 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 09:06:31.027207 kernel: kvm-clock: using sched offset of 3983619563 cycles
Jan 16 09:06:31.027219 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 09:06:31.027230 kernel: tsc: Detected 2494.136 MHz processor
Jan 16 09:06:31.027242 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 09:06:31.027254 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 09:06:31.027265 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 16 09:06:31.027277 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 16 09:06:31.027288 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 09:06:31.027305 kernel: ACPI: Early table checksum verification disabled
Jan 16 09:06:31.027317 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 16 09:06:31.027328 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 09:06:31.027341 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 09:06:31.027353 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 09:06:31.027364 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 16 09:06:31.027375 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 09:06:31.027387 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 09:06:31.027399 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 09:06:31.027414 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 09:06:31.027426 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 16 09:06:31.027438 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 16 09:06:31.027449 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 16 09:06:31.027461 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 16 09:06:31.027473 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 16 09:06:31.027485 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 16 09:06:31.027506 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 16 09:06:31.027519 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 16 09:06:31.027531 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 16 09:06:31.027544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 16 09:06:31.027555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 16 09:06:31.027572 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 16 09:06:31.027584 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 16 09:06:31.027602 kernel: Zone ranges:
Jan 16 09:06:31.027614 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 09:06:31.027626 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 16 09:06:31.027638 kernel: Normal empty
Jan 16 09:06:31.027649 kernel: Movable zone start for each node
Jan 16 09:06:31.027662 kernel: Early memory node ranges
Jan 16 09:06:31.027674 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 16 09:06:31.027687 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 16 09:06:31.027700 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 16 09:06:31.027717 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 09:06:31.027729 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 16 09:06:31.027746 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 16 09:06:31.027758 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 09:06:31.027770 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 09:06:31.027783 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 09:06:31.027796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 09:06:31.027822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 09:06:31.027834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 09:06:31.027852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 09:06:31.027864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 09:06:31.027890 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 09:06:31.027904 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 16 09:06:31.027917 kernel: TSC deadline timer available
Jan 16 09:06:31.027929 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 16 09:06:31.027942 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 16 09:06:31.027956 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 16 09:06:31.027972 kernel: Booting paravirtualized kernel on KVM
Jan 16 09:06:31.027985 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 09:06:31.028004 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 16 09:06:31.028017 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 16 09:06:31.028030 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 16 09:06:31.028042 kernel: pcpu-alloc: [0] 0 1
Jan 16 09:06:31.028055 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 16 09:06:31.028069 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 09:06:31.028081 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 16 09:06:31.028096 kernel: random: crng init done
Jan 16 09:06:31.028108 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 09:06:31.028121 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 16 09:06:31.028133 kernel: Fallback order for Node 0: 0
Jan 16 09:06:31.028146 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 16 09:06:31.028157 kernel: Policy zone: DMA32
Jan 16 09:06:31.028169 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 09:06:31.028181 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 16 09:06:31.028195 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 09:06:31.028212 kernel: Kernel/User page tables isolation: enabled
Jan 16 09:06:31.028226 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 16 09:06:31.028238 kernel: ftrace: allocated 149 pages with 4 groups
Jan 16 09:06:31.028251 kernel: Dynamic Preempt: voluntary
Jan 16 09:06:31.028264 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 09:06:31.028277 kernel: rcu: RCU event tracing is enabled.
Jan 16 09:06:31.028290 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 09:06:31.028302 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 09:06:31.028314 kernel: Rude variant of Tasks RCU enabled.
Jan 16 09:06:31.028328 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 09:06:31.028349 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 09:06:31.028362 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 09:06:31.028376 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 16 09:06:31.028388 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 09:06:31.028406 kernel: Console: colour VGA+ 80x25
Jan 16 09:06:31.028419 kernel: printk: console [tty0] enabled
Jan 16 09:06:31.028431 kernel: printk: console [ttyS0] enabled
Jan 16 09:06:31.028442 kernel: ACPI: Core revision 20230628
Jan 16 09:06:31.028455 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 16 09:06:31.028471 kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 09:06:31.028484 kernel: x2apic enabled
Jan 16 09:06:31.028496 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 16 09:06:31.028507 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 16 09:06:31.028520 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Jan 16 09:06:31.028534 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136)
Jan 16 09:06:31.028546 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 16 09:06:31.028559 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 16 09:06:31.028589 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 09:06:31.028604 kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 09:06:31.028620 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 16 09:06:31.028637 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 16 09:06:31.028650 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 16 09:06:31.028665 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 16 09:06:31.028679 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 16 09:06:31.028692 kernel: MDS: Mitigation: Clear CPU buffers
Jan 16 09:06:31.028705 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 09:06:31.028728 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 09:06:31.028742 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 09:06:31.028755 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 09:06:31.028770 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 09:06:31.028782 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 16 09:06:31.030791 kernel: Freeing SMP alternatives memory: 32K
Jan 16 09:06:31.030860 kernel: pid_max: default: 32768 minimum: 301
Jan 16 09:06:31.030875 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 09:06:31.030907 kernel: landlock: Up and running.
Jan 16 09:06:31.030920 kernel: SELinux: Initializing.
Jan 16 09:06:31.030933 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 09:06:31.030947 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 09:06:31.030960 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 16 09:06:31.030973 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 09:06:31.030986 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 09:06:31.030999 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 09:06:31.031012 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 16 09:06:31.031030 kernel: signal: max sigframe size: 1776
Jan 16 09:06:31.031044 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 09:06:31.031058 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 09:06:31.031072 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 16 09:06:31.031086 kernel: smp: Bringing up secondary CPUs ...
Jan 16 09:06:31.031099 kernel: smpboot: x86: Booting SMP configuration:
Jan 16 09:06:31.031112 kernel: .... node #0, CPUs: #1
Jan 16 09:06:31.031124 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 09:06:31.031151 kernel: smpboot: Max logical packages: 1
Jan 16 09:06:31.031167 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS)
Jan 16 09:06:31.031180 kernel: devtmpfs: initialized
Jan 16 09:06:31.031193 kernel: x86/mm: Memory block size: 128MB
Jan 16 09:06:31.031206 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 09:06:31.031219 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 09:06:31.031233 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 09:06:31.031246 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 09:06:31.031259 kernel: audit: initializing netlink subsys (disabled)
Jan 16 09:06:31.031272 kernel: audit: type=2000 audit(1737018390.170:1): state=initialized audit_enabled=0 res=1
Jan 16 09:06:31.031290 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 09:06:31.031302 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 09:06:31.031316 kernel: cpuidle: using governor menu
Jan 16 09:06:31.031328 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 09:06:31.031342 kernel: dca service started, version 1.12.1
Jan 16 09:06:31.031355 kernel: PCI: Using configuration type 1 for base access
Jan 16 09:06:31.031368 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 09:06:31.031381 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 09:06:31.031394 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 09:06:31.031411 kernel: ACPI: Added _OSI(Module Device)
Jan 16 09:06:31.031426 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 09:06:31.031441 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 16 09:06:31.031456 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 09:06:31.031470 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 09:06:31.031483 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 16 09:06:31.031496 kernel: ACPI: Interpreter enabled
Jan 16 09:06:31.031512 kernel: ACPI: PM: (supports S0 S5)
Jan 16 09:06:31.031529 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 09:06:31.031549 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 09:06:31.031562 kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 09:06:31.031575 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 16 09:06:31.031588 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 09:06:31.031927 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 09:06:31.032104 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 16 09:06:31.032250 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 16 09:06:31.032275 kernel: acpiphp: Slot [3] registered
Jan 16 09:06:31.032288 kernel: acpiphp: Slot [4] registered
Jan 16 09:06:31.032301 kernel: acpiphp: Slot [5] registered
Jan 16 09:06:31.032314 kernel: acpiphp: Slot [6] registered
Jan 16 09:06:31.032327 kernel: acpiphp: Slot [7] registered
Jan 16 09:06:31.032340 kernel: acpiphp: Slot [8] registered
Jan 16 09:06:31.032352 kernel: acpiphp: Slot [9] registered
Jan 16 09:06:31.032365 kernel: acpiphp: Slot [10] registered
Jan 16 09:06:31.032379 kernel: acpiphp: Slot [11] registered
Jan 16 09:06:31.032397 kernel: acpiphp: Slot [12] registered
Jan 16 09:06:31.032410 kernel: acpiphp: Slot [13] registered
Jan 16 09:06:31.032423 kernel: acpiphp: Slot [14] registered
Jan 16 09:06:31.032437 kernel: acpiphp: Slot [15] registered
Jan 16 09:06:31.032450 kernel: acpiphp: Slot [16] registered
Jan 16 09:06:31.032464 kernel: acpiphp: Slot [17] registered
Jan 16 09:06:31.032476 kernel: acpiphp: Slot [18] registered
Jan 16 09:06:31.032489 kernel: acpiphp: Slot [19] registered
Jan 16 09:06:31.032502 kernel: acpiphp: Slot [20] registered
Jan 16 09:06:31.032516 kernel: acpiphp: Slot [21] registered
Jan 16 09:06:31.032533 kernel: acpiphp: Slot [22] registered
Jan 16 09:06:31.032545 kernel: acpiphp: Slot [23] registered
Jan 16 09:06:31.032558 kernel: acpiphp: Slot [24] registered
Jan 16 09:06:31.032571 kernel: acpiphp: Slot [25] registered
Jan 16 09:06:31.032584 kernel: acpiphp: Slot [26] registered
Jan 16 09:06:31.032599 kernel: acpiphp: Slot [27] registered
Jan 16 09:06:31.032613 kernel: acpiphp: Slot [28] registered
Jan 16 09:06:31.032627 kernel: acpiphp: Slot [29] registered
Jan 16 09:06:31.032640 kernel: acpiphp: Slot [30] registered
Jan 16 09:06:31.032658 kernel: acpiphp: Slot [31] registered
Jan 16 09:06:31.032671 kernel: PCI host bridge to bus 0000:00
Jan 16 09:06:31.034628 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 09:06:31.034855 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 09:06:31.034983 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 09:06:31.035114 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 16 09:06:31.035236 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 16 09:06:31.035355 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 09:06:31.035544 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 16 09:06:31.035697 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 16 09:06:31.035961 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 16 09:06:31.036121 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 16 09:06:31.036270 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 16 09:06:31.036413 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 16 09:06:31.036568 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 16 09:06:31.036725 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 16 09:06:31.038783 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 16 09:06:31.038989 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 16 09:06:31.039147 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 16 09:06:31.039291 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 16 09:06:31.039480 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 16 09:06:31.039643 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 16 09:06:31.039783 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 16 09:06:31.040097 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 16 09:06:31.040242 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 16 09:06:31.040378 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 16 09:06:31.040534 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 09:06:31.040707 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 16 09:06:31.040911 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 16 09:06:31.041048 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 16 09:06:31.041187 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 16 09:06:31.041356 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 16 09:06:31.041492 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 16 09:06:31.041631 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 16 09:06:31.041772 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 16 09:06:31.041997 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 16 09:06:31.042136 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 16 09:06:31.042268 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 16 09:06:31.042463 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 16 09:06:31.042640 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 16 09:06:31.042779 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 16 09:06:31.042941 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 16 09:06:31.043085 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 16 09:06:31.043234 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 16 09:06:31.043370 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 16 09:06:31.043501 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 16 09:06:31.043635 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 16 09:06:31.043779 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 16 09:06:31.046221 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 16 09:06:31.046411 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 16 09:06:31.046432 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 09:06:31.046447 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 09:06:31.046461 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 09:06:31.046474 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 09:06:31.046486 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 16 09:06:31.046509 kernel: iommu: Default domain type: Translated
Jan 16 09:06:31.046522 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 09:06:31.046535 kernel: PCI: Using ACPI for IRQ routing
Jan 16 09:06:31.046548 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 09:06:31.046562 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 16 09:06:31.046576 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 16 09:06:31.046720 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 16 09:06:31.049074 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 16 09:06:31.049262 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 09:06:31.049282 kernel: vgaarb: loaded
Jan 16 09:06:31.049297 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 16 09:06:31.049312 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 16 09:06:31.049324 kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 09:06:31.049338 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 09:06:31.049354 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 09:06:31.049367 kernel: pnp: PnP ACPI init
Jan 16 09:06:31.049380 kernel: pnp: PnP ACPI: found 4 devices
Jan 16 09:06:31.049393 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 09:06:31.049413 kernel: NET: Registered PF_INET protocol family
Jan 16 09:06:31.049426 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 09:06:31.049438 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 16 09:06:31.049452 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 09:06:31.049466 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 16 09:06:31.049479 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 16 09:06:31.049492 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 16 09:06:31.049505 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 09:06:31.049522 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 09:06:31.049537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 09:06:31.049550 kernel: NET: Registered PF_XDP protocol family
Jan 16 09:06:31.049685 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 09:06:31.049822 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 09:06:31.049975 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 09:06:31.050097 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 16 09:06:31.050213 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 16 09:06:31.050357 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 16 09:06:31.050507 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 16 09:06:31.050524 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 16 09:06:31.050662 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 53380 usecs
Jan 16 09:06:31.050680 kernel: PCI: CLS 0 bytes, default 64
Jan 16 09:06:31.050692 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 16 09:06:31.050706 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Jan 16 09:06:31.050718 kernel: Initialise system trusted keyrings
Jan 16 09:06:31.050731 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 16 09:06:31.050750 kernel: Key type asymmetric registered
Jan 16 09:06:31.050763 kernel: Asymmetric key parser 'x509' registered
Jan 16 09:06:31.052880 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 16 09:06:31.052930 kernel: io scheduler mq-deadline registered
Jan 16 09:06:31.052944 kernel: io scheduler kyber registered
Jan 16 09:06:31.052957 kernel: io scheduler bfq registered
Jan 16 09:06:31.052970 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 16 09:06:31.052984 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 16 09:06:31.052997 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 16 09:06:31.053018 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 16 09:06:31.053032 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 09:06:31.053044 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 09:06:31.053057 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 09:06:31.053071 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 09:06:31.053084 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 09:06:31.053283 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 16 09:06:31.053304 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 16 09:06:31.053432 kernel: rtc_cmos 00:03: registered as rtc0
Jan 16 09:06:31.053554 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T09:06:30 UTC (1737018390)
Jan 16 09:06:31.053677 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 16 09:06:31.053694 kernel: intel_pstate: CPU model not supported
Jan 16 09:06:31.053707 kernel: NET: Registered PF_INET6 protocol family
Jan 16 09:06:31.053720 kernel: Segment Routing with IPv6
Jan 16 09:06:31.053733 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 09:06:31.053745 kernel: NET: Registered PF_PACKET protocol family
Jan 16 09:06:31.053758 kernel: Key type dns_resolver registered
Jan 16 09:06:31.053778 kernel: IPI shorthand broadcast: enabled
Jan 16 09:06:31.053792 kernel: sched_clock: Marking stable (1318021638, 111849013)->(1481347970, -51477319)
Jan 16 09:06:31.053818 kernel: registered taskstats version 1
Jan 16 09:06:31.053831 kernel: Loading compiled-in X.509 certificates
Jan 16 09:06:31.053845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 16 09:06:31.053909 kernel: Key type .fscrypt registered
Jan 16 09:06:31.053923 kernel: Key type fscrypt-provisioning registered
Jan 16 09:06:31.053937 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 09:06:31.053957 kernel: ima: Allocated hash algorithm: sha1
Jan 16 09:06:31.053970 kernel: ima: No architecture policies found
Jan 16 09:06:31.053984 kernel: clk: Disabling unused clocks
Jan 16 09:06:31.053997 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 16 09:06:31.054010 kernel: Write protecting the kernel read-only data: 36864k
Jan 16 09:06:31.054048 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 16 09:06:31.054066 kernel: Run /init as init process
Jan 16 09:06:31.054079 kernel: with arguments:
Jan 16 09:06:31.054092 kernel: /init
Jan 16 09:06:31.054108 kernel: with environment:
Jan 16 09:06:31.054121 kernel: HOME=/
Jan 16 09:06:31.054136 kernel: TERM=linux
Jan 16 09:06:31.054149 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 16 09:06:31.054168 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 09:06:31.054184 systemd[1]: Detected virtualization kvm.
Jan 16 09:06:31.054199 systemd[1]: Detected architecture x86-64.
Jan 16 09:06:31.054213 systemd[1]: Running in initrd.
Jan 16 09:06:31.054232 systemd[1]: No hostname configured, using default hostname.
Jan 16 09:06:31.054247 systemd[1]: Hostname set to .
Jan 16 09:06:31.054261 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 09:06:31.054274 systemd[1]: Queued start job for default target initrd.target.
Jan 16 09:06:31.054288 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 09:06:31.054301 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 09:06:31.054317 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 09:06:31.054331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 09:06:31.054351 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 09:06:31.054365 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 09:06:31.054386 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 09:06:31.054402 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 09:06:31.054416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 09:06:31.054430 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 09:06:31.054444 systemd[1]: Reached target paths.target - Path Units.
Jan 16 09:06:31.054463 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 09:06:31.054477 systemd[1]: Reached target swap.target - Swaps.
Jan 16 09:06:31.054494 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 09:06:31.054509 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 09:06:31.054523 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 09:06:31.054541 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 09:06:31.054556 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 09:06:31.054571 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 09:06:31.054585 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 09:06:31.054599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 09:06:31.054613 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 09:06:31.054627 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 09:06:31.054641 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 09:06:31.054655 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 09:06:31.054699 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 09:06:31.054713 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 09:06:31.054727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 09:06:31.054741 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:06:31.054755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 09:06:31.054769 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 09:06:31.054785 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 09:06:31.057119 systemd-journald[182]: Collecting audit messages is disabled.
Jan 16 09:06:31.057174 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 09:06:31.057192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 09:06:31.057208 systemd-journald[182]: Journal started
Jan 16 09:06:31.057238 systemd-journald[182]: Runtime Journal (/run/log/journal/144d0c5c319445f08c1fddd1fb4c2492) is 4.9M, max 39.3M, 34.4M free.
Jan 16 09:06:31.026910 systemd-modules-load[183]: Inserted module 'overlay'
Jan 16 09:06:31.103639 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 09:06:31.103696 kernel: Bridge firewalling registered
Jan 16 09:06:31.103719 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 09:06:31.090660 systemd-modules-load[183]: Inserted module 'br_netfilter'
Jan 16 09:06:31.109538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 09:06:31.110590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:06:31.118298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 09:06:31.122141 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 09:06:31.130517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 09:06:31.136369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 09:06:31.154667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 09:06:31.157701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 09:06:31.160930 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 09:06:31.167211 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 09:06:31.170687 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 09:06:31.180181 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 09:06:31.186276 dracut-cmdline[215]: dracut-dracut-053
Jan 16 09:06:31.191299 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 09:06:31.222557 systemd-resolved[220]: Positive Trust Anchors:
Jan 16 09:06:31.222576 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 09:06:31.222630 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 09:06:31.231333 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 16 09:06:31.233736 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 09:06:31.234435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 09:06:31.296881 kernel: SCSI subsystem initialized
Jan 16 09:06:31.308832 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 09:06:31.322855 kernel: iscsi: registered transport (tcp)
Jan 16 09:06:31.350874 kernel: iscsi: registered transport (qla4xxx)
Jan 16 09:06:31.350965 kernel: QLogic iSCSI HBA Driver
Jan 16 09:06:31.415426 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 09:06:31.424180 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 09:06:31.463897 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 09:06:31.465757 kernel: device-mapper: uevent: version 1.0.3
Jan 16 09:06:31.465790 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 09:06:31.519876 kernel: raid6: avx2x4 gen() 13259 MB/s
Jan 16 09:06:31.536899 kernel: raid6: avx2x2 gen() 12826 MB/s
Jan 16 09:06:31.554422 kernel: raid6: avx2x1 gen() 10242 MB/s
Jan 16 09:06:31.554514 kernel: raid6: using algorithm avx2x4 gen() 13259 MB/s
Jan 16 09:06:31.572079 kernel: raid6: .... xor() 5213 MB/s, rmw enabled
Jan 16 09:06:31.572189 kernel: raid6: using avx2x2 recovery algorithm
Jan 16 09:06:31.606595 kernel: xor: automatically using best checksumming function avx
Jan 16 09:06:31.816887 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 09:06:31.838997 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 09:06:31.847297 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 09:06:31.909618 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 16 09:06:31.918520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 09:06:31.929373 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 09:06:31.969553 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 16 09:06:32.067089 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 09:06:32.078738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 09:06:32.195489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 09:06:32.205236 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 09:06:32.262173 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 09:06:32.267439 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 09:06:32.269374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 09:06:32.271072 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 09:06:32.282354 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 09:06:32.324708 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 09:06:32.330847 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 16 09:06:32.404741 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 16 09:06:32.405018 kernel: scsi host0: Virtio SCSI HBA
Jan 16 09:06:32.405294 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 09:06:32.405336 kernel: GPT:9289727 != 125829119
Jan 16 09:06:32.405354 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 09:06:32.405370 kernel: GPT:9289727 != 125829119
Jan 16 09:06:32.405386 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 09:06:32.405404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 09:06:32.405421 kernel: ACPI: bus type USB registered
Jan 16 09:06:32.405440 kernel: usbcore: registered new interface driver usbfs
Jan 16 09:06:32.405453 kernel: usbcore: registered new interface driver hub
Jan 16 09:06:32.405465 kernel: usbcore: registered new device driver usb
Jan 16 09:06:32.414837 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 09:06:32.445856 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 16 09:06:32.455120 kernel: virtio_blk virtio5: [vdb] 920 512-byte logical blocks (471 kB/460 KiB)
Jan 16 09:06:32.461598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 09:06:32.461883 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 09:06:32.466162 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 09:06:32.466698 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 09:06:32.467017 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:06:32.469148 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:06:32.477377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:06:32.498831 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 16 09:06:32.499878 kernel: libata version 3.00 loaded.
Jan 16 09:06:32.507243 kernel: AES CTR mode by8 optimization enabled
Jan 16 09:06:32.514846 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 16 09:06:32.536683 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Jan 16 09:06:32.536714 kernel: scsi host1: ata_piix
Jan 16 09:06:32.537008 kernel: scsi host2: ata_piix
Jan 16 09:06:32.537224 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 16 09:06:32.537260 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 16 09:06:32.565892 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 16 09:06:32.599660 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (446)
Jan 16 09:06:32.594432 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:06:32.606734 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 16 09:06:32.609767 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 16 09:06:32.610195 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 16 09:06:32.610411 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 16 09:06:32.610609 kernel: hub 1-0:1.0: USB hub found
Jan 16 09:06:32.610864 kernel: hub 1-0:1.0: 2 ports detected
Jan 16 09:06:32.608030 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 16 09:06:32.620968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 09:06:32.627074 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 16 09:06:32.627794 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 16 09:06:32.641201 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 09:06:32.645671 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 09:06:32.653644 disk-uuid[532]: Primary Header is updated.
Jan 16 09:06:32.653644 disk-uuid[532]: Secondary Entries is updated.
Jan 16 09:06:32.653644 disk-uuid[532]: Secondary Header is updated.
Jan 16 09:06:32.661840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 09:06:32.686538 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 09:06:32.696044 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 09:06:33.672927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 09:06:33.674456 disk-uuid[533]: The operation has completed successfully.
Jan 16 09:06:33.724584 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 09:06:33.724747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 09:06:33.750187 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 09:06:33.765709 sh[561]: Success
Jan 16 09:06:33.782832 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 16 09:06:33.864966 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 09:06:33.873291 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 09:06:33.881634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 09:06:33.903855 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 16 09:06:33.903962 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 16 09:06:33.903988 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 09:06:33.904010 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 09:06:33.905247 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 09:06:33.917453 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 09:06:33.918781 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 09:06:33.929119 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 09:06:33.933125 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 09:06:33.947067 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 09:06:33.947153 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 09:06:33.947168 kernel: BTRFS info (device vda6): using free space tree
Jan 16 09:06:33.951879 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 09:06:33.967162 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 09:06:33.966538 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 09:06:33.978501 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 09:06:33.990021 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 09:06:34.152895 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 09:06:34.166286 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 09:06:34.175281 ignition[643]: Ignition 2.19.0
Jan 16 09:06:34.175300 ignition[643]: Stage: fetch-offline
Jan 16 09:06:34.175398 ignition[643]: no configs at "/usr/lib/ignition/base.d"
Jan 16 09:06:34.175416 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:06:34.175617 ignition[643]: parsed url from cmdline: ""
Jan 16 09:06:34.182459 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 09:06:34.175624 ignition[643]: no config URL provided
Jan 16 09:06:34.175634 ignition[643]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 09:06:34.175651 ignition[643]: no config at "/usr/lib/ignition/user.ign"
Jan 16 09:06:34.175661 ignition[643]: failed to fetch config: resource requires networking
Jan 16 09:06:34.176297 ignition[643]: Ignition finished successfully
Jan 16 09:06:34.214985 systemd-networkd[752]: lo: Link UP
Jan 16 09:06:34.215000 systemd-networkd[752]: lo: Gained carrier
Jan 16 09:06:34.218675 systemd-networkd[752]: Enumeration completed
Jan 16 09:06:34.218938 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 09:06:34.220274 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 09:06:34.220280 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 16 09:06:34.220925 systemd[1]: Reached target network.target - Network.
Jan 16 09:06:34.222261 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 09:06:34.222268 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 09:06:34.223788 systemd-networkd[752]: eth0: Link UP
Jan 16 09:06:34.223795 systemd-networkd[752]: eth0: Gained carrier
Jan 16 09:06:34.223895 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 09:06:34.227379 systemd-networkd[752]: eth1: Link UP
Jan 16 09:06:34.227386 systemd-networkd[752]: eth1: Gained carrier
Jan 16 09:06:34.227432 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 09:06:34.233016 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 09:06:34.239938 systemd-networkd[752]: eth0: DHCPv4 address 143.110.238.88/20, gateway 143.110.224.1 acquired from 169.254.169.253
Jan 16 09:06:34.243958 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.7/20 acquired from 169.254.169.253
Jan 16 09:06:34.283956 ignition[755]: Ignition 2.19.0
Jan 16 09:06:34.283971 ignition[755]: Stage: fetch
Jan 16 09:06:34.284292 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 16 09:06:34.284313 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:06:34.284537 ignition[755]: parsed url from cmdline: ""
Jan 16 09:06:34.284544 ignition[755]: no config URL provided
Jan 16 09:06:34.284553 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 09:06:34.284567 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 16 09:06:34.284596 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 16 09:06:34.311913 ignition[755]: GET result: OK
Jan 16 09:06:34.312596 ignition[755]: parsing config with SHA512: f6137fb28d7839fff071ad1e54d14b2f3c3c5bccc6ae9170811a1ff5ca9c842b3edfa6387e7b3d30d40a0e8edf81c2fcd5886549d0d83bbfe906ee3b5d73f993
Jan 16 09:06:34.320875 unknown[755]: fetched base config from "system"
Jan 16 09:06:34.321218 ignition[755]: fetch: fetch complete
Jan 16 09:06:34.320890 unknown[755]: fetched base config from "system"
Jan 16 09:06:34.321225 ignition[755]: fetch: fetch passed
Jan 16 09:06:34.320898 unknown[755]: fetched user config from "digitalocean"
Jan 16 09:06:34.321289 ignition[755]: Ignition finished successfully
Jan 16 09:06:34.325979 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 09:06:34.339338 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 09:06:34.376244 ignition[762]: Ignition 2.19.0
Jan 16 09:06:34.376262 ignition[762]: Stage: kargs
Jan 16 09:06:34.376780 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 16 09:06:34.376846 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:06:34.378193 ignition[762]: kargs: kargs passed
Jan 16 09:06:34.378498 ignition[762]: Ignition finished successfully
Jan 16 09:06:34.380556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 09:06:34.389207 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 09:06:34.440231 ignition[769]: Ignition 2.19.0
Jan 16 09:06:34.440257 ignition[769]: Stage: disks
Jan 16 09:06:34.440659 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jan 16 09:06:34.440679 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:06:34.444272 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 09:06:34.442262 ignition[769]: disks: disks passed
Jan 16 09:06:34.442378 ignition[769]: Ignition finished successfully
Jan 16 09:06:34.451723 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 09:06:34.453158 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 09:06:34.454218 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 09:06:34.455282 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 09:06:34.456384 systemd[1]: Reached target basic.target - Basic System.
Jan 16 09:06:34.468836 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 09:06:34.502258 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 16 09:06:34.506191 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 09:06:34.515065 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 09:06:34.643847 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 16 09:06:34.644117 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 09:06:34.646557 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 09:06:34.654166 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 09:06:34.663125 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 09:06:34.667724 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 16 09:06:34.680050 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (786)
Jan 16 09:06:34.681144 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 16 09:06:34.685418 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 09:06:34.685454 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 09:06:34.685481 kernel: BTRFS info (device vda6): using free space tree
Jan 16 09:06:34.686256 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 09:06:34.686312 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 09:06:34.689377 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 09:06:34.693404 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 09:06:34.700214 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 09:06:34.706195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 09:06:34.804188 coreos-metadata[788]: Jan 16 09:06:34.804 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 09:06:34.817872 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 09:06:34.819049 coreos-metadata[788]: Jan 16 09:06:34.818 INFO Fetch successful
Jan 16 09:06:34.823621 coreos-metadata[789]: Jan 16 09:06:34.823 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 09:06:34.826721 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 16 09:06:34.826870 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 16 09:06:34.833245 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Jan 16 09:06:34.837475 coreos-metadata[789]: Jan 16 09:06:34.837 INFO Fetch successful
Jan 16 09:06:34.840913 coreos-metadata[789]: Jan 16 09:06:34.840 INFO wrote hostname ci-4081.3.0-f-8515dcac45 to /sysroot/etc/hostname
Jan 16 09:06:34.842016 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 09:06:34.845593 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 09:06:34.852777 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 09:06:35.027386 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 09:06:35.036113 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 09:06:35.039171 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 09:06:35.058587 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 09:06:35.060118 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 09:06:35.117687 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 09:06:35.139089 ignition[906]: INFO : Ignition 2.19.0
Jan 16 09:06:35.139089 ignition[906]: INFO : Stage: mount
Jan 16 09:06:35.140568 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 09:06:35.140568 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:06:35.142037 ignition[906]: INFO : mount: mount passed
Jan 16 09:06:35.142037 ignition[906]: INFO : Ignition finished successfully
Jan 16 09:06:35.142444 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 09:06:35.149411 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 09:06:35.187578 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 09:06:35.206228 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918)
Jan 16 09:06:35.206312 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 09:06:35.208698 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 09:06:35.208834 kernel: BTRFS info (device vda6): using free space tree
Jan 16 09:06:35.216881 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 09:06:35.231184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 09:06:35.270121 ignition[934]: INFO : Ignition 2.19.0
Jan 16 09:06:35.271083 ignition[934]: INFO : Stage: files
Jan 16 09:06:35.271583 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 09:06:35.271583 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:06:35.273144 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 09:06:35.275496 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 09:06:35.275496 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 09:06:35.279860 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 09:06:35.280877 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 09:06:35.282358 unknown[934]: wrote ssh authorized keys file for user: core
Jan 16 09:06:35.283344 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 09:06:35.284589 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 09:06:35.285520 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 09:06:35.285520 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 09:06:35.287476 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 09:06:35.287476 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 16 09:06:35.287476 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 16 09:06:35.287476 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 16 09:06:35.287476 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 16 09:06:35.658437 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 16 09:06:35.960621 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 16 09:06:35.962896 ignition[934]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 09:06:35.962896 ignition[934]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 09:06:35.962896 ignition[934]: INFO : files: files passed
Jan 16 09:06:35.962896 ignition[934]: INFO : Ignition finished successfully
Jan 16 09:06:35.964035 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 09:06:35.975233 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 09:06:35.979112 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 09:06:35.986055 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 09:06:35.986213 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 09:06:36.006062 systemd-networkd[752]: eth1: Gained IPv6LL
Jan 16 09:06:36.019435 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 09:06:36.019435 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 09:06:36.022336 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 09:06:36.029724 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 09:06:36.031589 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 09:06:36.043699 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 09:06:36.086123 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 09:06:36.086388 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 09:06:36.088518 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 09:06:36.089201 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 09:06:36.090551 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 09:06:36.096207 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 09:06:36.132259 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 09:06:36.140155 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 09:06:36.165587 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 09:06:36.167121 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 09:06:36.168723 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 09:06:36.169408 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 09:06:36.169622 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 09:06:36.171155 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 09:06:36.171843 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 09:06:36.172342 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 09:06:36.173784 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 09:06:36.175171 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 09:06:36.176303 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 09:06:36.177591 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 09:06:36.179675 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 09:06:36.180640 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 09:06:36.182124 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 09:06:36.182775 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 09:06:36.183179 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 09:06:36.184227 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 09:06:36.194064 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 09:06:36.194929 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 09:06:36.198003 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 09:06:36.198157 systemd-networkd[752]: eth0: Gained IPv6LL
Jan 16 09:06:36.201746 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 09:06:36.202248 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 09:06:36.207450 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 09:06:36.207834 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 09:06:36.209268 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 09:06:36.209570 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 09:06:36.210535 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 16 09:06:36.210738 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 09:06:36.223368 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 09:06:36.232432 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 09:06:36.235207 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 09:06:36.235498 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 09:06:36.236991 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 09:06:36.237209 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 09:06:36.248535 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 09:06:36.249574 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 09:06:36.265624 ignition[988]: INFO : Ignition 2.19.0
Jan 16 09:06:36.267246 ignition[988]: INFO : Stage: umount
Jan 16 09:06:36.267246 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 09:06:36.267246 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:06:36.290351 ignition[988]: INFO : umount: umount passed
Jan 16 09:06:36.290351 ignition[988]: INFO : Ignition finished successfully
Jan 16 09:06:36.279201 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 09:06:36.280451 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 09:06:36.280574 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 09:06:36.290109 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 09:06:36.290258 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 09:06:36.290898 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 09:06:36.290978 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 09:06:36.291861 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 16 09:06:36.291934 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 16 09:06:36.298077 systemd[1]: Stopped target network.target - Network.
Jan 16 09:06:36.298555 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 09:06:36.298670 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 09:06:36.300175 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 09:06:36.300552 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 09:06:36.301261 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 09:06:36.302106 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 09:06:36.303306 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 09:06:36.304199 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 09:06:36.304268 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 09:06:36.305039 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 09:06:36.305103 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 09:06:36.305751 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 09:06:36.306076 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 09:06:36.306550 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 09:06:36.306612 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 09:06:36.307462 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 09:06:36.308507 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 09:06:36.309531 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 09:06:36.309640 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 09:06:36.311139 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 09:06:36.311255 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 09:06:36.311876 systemd-networkd[752]: eth0: DHCPv6 lease lost
Jan 16 09:06:36.318142 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 09:06:36.318336 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 09:06:36.318892 systemd-networkd[752]: eth1: DHCPv6 lease lost
Jan 16 09:06:36.324463 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 09:06:36.324924 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 09:06:36.327545 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 09:06:36.327622 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 09:06:36.335109 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 09:06:36.336118 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 09:06:36.336228 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 09:06:36.338015 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 09:06:36.338103 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 09:06:36.341446 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 09:06:36.341548 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 09:06:36.342554 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 09:06:36.342616 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 09:06:36.343507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 09:06:36.358725 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 09:06:36.358969 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 09:06:36.360601 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 09:06:36.360916 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 09:06:36.362869 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 09:06:36.362970 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 09:06:36.364177 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 09:06:36.364240 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 09:06:36.365245 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 09:06:36.365329 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 09:06:36.366597 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 09:06:36.366684 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 09:06:36.368049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 09:06:36.368137 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 09:06:36.386266 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 09:06:36.387609 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 09:06:36.387708 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 09:06:36.388219 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 16 09:06:36.388271 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 09:06:36.388679 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 09:06:36.388737 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 09:06:36.392187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 09:06:36.392255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:06:36.395656 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 09:06:36.395871 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 09:06:36.397359 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 09:06:36.402152 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 09:06:36.424474 systemd[1]: Switching root.
Jan 16 09:06:36.456288 systemd-journald[182]: Journal stopped
Jan 16 09:06:38.150246 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Jan 16 09:06:38.150353 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 09:06:38.150379 kernel: SELinux: policy capability open_perms=1
Jan 16 09:06:38.150397 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 09:06:38.150411 kernel: SELinux: policy capability always_check_network=0
Jan 16 09:06:38.150431 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 09:06:38.150466 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 09:06:38.150486 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 09:06:38.150515 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 09:06:38.150532 kernel: audit: type=1403 audit(1737018396.660:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 16 09:06:38.150551 systemd[1]: Successfully loaded SELinux policy in 63.715ms.
Jan 16 09:06:38.150579 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.812ms.
Jan 16 09:06:38.150594 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 09:06:38.150614 systemd[1]: Detected virtualization kvm.
Jan 16 09:06:38.150634 systemd[1]: Detected architecture x86-64.
Jan 16 09:06:38.150650 systemd[1]: Detected first boot.
Jan 16 09:06:38.150674 systemd[1]: Hostname set to .
Jan 16 09:06:38.150696 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 09:06:38.150719 zram_generator::config[1030]: No configuration found.
Jan 16 09:06:38.150742 systemd[1]: Populated /etc with preset unit settings.
Jan 16 09:06:38.150766 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 09:06:38.150788 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 09:06:38.152947 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 09:06:38.152987 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 09:06:38.153022 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 09:06:38.153036 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 09:06:38.153050 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 09:06:38.153097 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 09:06:38.153116 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 09:06:38.153134 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 09:06:38.153154 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 09:06:38.153173 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 09:06:38.153195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 09:06:38.153213 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 09:06:38.153227 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 09:06:38.153240 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 09:06:38.153256 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 09:06:38.153275 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 16 09:06:38.153289 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 09:06:38.153302 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 16 09:06:38.153319 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 09:06:38.153333 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 16 09:06:38.153347 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 09:06:38.153361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 09:06:38.153374 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 09:06:38.153387 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 09:06:38.153401 systemd[1]: Reached target swap.target - Swaps.
Jan 16 09:06:38.153415 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 09:06:38.153438 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 09:06:38.153451 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 09:06:38.153493 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 09:06:38.153507 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 09:06:38.153521 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 09:06:38.153534 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 09:06:38.153549 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 09:06:38.153562 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 09:06:38.153575 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:38.153593 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 09:06:38.153607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 09:06:38.153620 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 09:06:38.153634 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 09:06:38.153647 systemd[1]: Reached target machines.target - Containers.
Jan 16 09:06:38.153660 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 09:06:38.153674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:06:38.153687 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 09:06:38.153703 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 09:06:38.153716 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 09:06:38.153730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 09:06:38.153744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 09:06:38.153756 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 09:06:38.153769 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 09:06:38.153791 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 09:06:38.153824 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 16 09:06:38.153845 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 16 09:06:38.153886 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 16 09:06:38.153905 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 16 09:06:38.153924 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 09:06:38.153944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 09:06:38.153964 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 09:06:38.153983 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 16 09:06:38.154003 kernel: fuse: init (API version 7.39)
Jan 16 09:06:38.154025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 09:06:38.154046 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 16 09:06:38.154071 systemd[1]: Stopped verity-setup.service.
Jan 16 09:06:38.154090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:38.154109 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 16 09:06:38.154131 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 16 09:06:38.154153 systemd[1]: Mounted media.mount - External Media Directory.
Jan 16 09:06:38.154175 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 16 09:06:38.154189 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 16 09:06:38.154208 kernel: ACPI: bus type drm_connector registered
Jan 16 09:06:38.154227 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 16 09:06:38.154263 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 09:06:38.154289 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 09:06:38.154305 kernel: loop: module loaded
Jan 16 09:06:38.154326 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 16 09:06:38.154344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 09:06:38.154358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 09:06:38.154371 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 09:06:38.154384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 09:06:38.154404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 09:06:38.154425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 09:06:38.154447 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 09:06:38.154468 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 16 09:06:38.154486 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 09:06:38.154503 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 09:06:38.154519 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 16 09:06:38.154533 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 16 09:06:38.154550 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 09:06:38.154569 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 09:06:38.154588 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 09:06:38.154664 systemd-journald[1099]: Collecting audit messages is disabled.
Jan 16 09:06:38.154703 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 09:06:38.154737 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 16 09:06:38.154758 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 09:06:38.154780 systemd-journald[1099]: Journal started
Jan 16 09:06:38.160769 systemd-journald[1099]: Runtime Journal (/run/log/journal/144d0c5c319445f08c1fddd1fb4c2492) is 4.9M, max 39.3M, 34.4M free.
Jan 16 09:06:38.160973 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 09:06:38.161035 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 09:06:38.161055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 16 09:06:37.655890 systemd[1]: Queued start job for default target multi-user.target.
Jan 16 09:06:37.683530 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 16 09:06:37.684484 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 16 09:06:38.169351 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 09:06:38.169423 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 16 09:06:38.179928 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 16 09:06:38.187838 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 16 09:06:38.192126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:06:38.203837 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 09:06:38.203942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 09:06:38.211840 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 09:06:38.224837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 09:06:38.240952 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 09:06:38.244834 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 09:06:38.259966 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 16 09:06:38.284480 systemd-tmpfiles[1113]: ACLs are not supported, ignoring.
Jan 16 09:06:38.284506 systemd-tmpfiles[1113]: ACLs are not supported, ignoring.
Jan 16 09:06:38.318596 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 09:06:38.345116 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 09:06:38.361163 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 09:06:38.372491 kernel: loop0: detected capacity change from 0 to 140768
Jan 16 09:06:38.372701 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 16 09:06:38.378829 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 09:06:38.390761 systemd-journald[1099]: Time spent on flushing to /var/log/journal/144d0c5c319445f08c1fddd1fb4c2492 is 149.407ms for 978 entries.
Jan 16 09:06:38.390761 systemd-journald[1099]: System Journal (/var/log/journal/144d0c5c319445f08c1fddd1fb4c2492) is 8.0M, max 195.6M, 187.6M free.
Jan 16 09:06:38.553284 systemd-journald[1099]: Received client request to flush runtime journal.
Jan 16 09:06:38.553363 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 16 09:06:38.553384 kernel: loop1: detected capacity change from 0 to 8
Jan 16 09:06:38.398196 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 16 09:06:38.402316 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 09:06:38.409456 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 16 09:06:38.420252 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 09:06:38.457712 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 09:06:38.463213 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 16 09:06:38.487828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 09:06:38.544661 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 16 09:06:38.560531 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 09:06:38.567270 kernel: loop2: detected capacity change from 0 to 205544
Jan 16 09:06:38.589187 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 09:06:38.604251 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 09:06:38.622522 kernel: loop3: detected capacity change from 0 to 142488
Jan 16 09:06:38.679835 kernel: loop4: detected capacity change from 0 to 140768
Jan 16 09:06:38.696432 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 16 09:06:38.696858 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 16 09:06:38.705291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 09:06:38.725132 kernel: loop5: detected capacity change from 0 to 8
Jan 16 09:06:38.733925 kernel: loop6: detected capacity change from 0 to 205544
Jan 16 09:06:38.781841 kernel: loop7: detected capacity change from 0 to 142488
Jan 16 09:06:38.831913 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 16 09:06:38.833458 (sd-merge)[1177]: Merged extensions into '/usr'.
Jan 16 09:06:38.844291 systemd[1]: Reloading requested from client PID 1128 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 16 09:06:38.845172 systemd[1]: Reloading...
Jan 16 09:06:38.988385 zram_generator::config[1201]: No configuration found.
Jan 16 09:06:39.329095 ldconfig[1123]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 16 09:06:39.431078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 09:06:39.513503 systemd[1]: Reloading finished in 667 ms.
Jan 16 09:06:39.557390 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 16 09:06:39.559365 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 16 09:06:39.576332 systemd[1]: Starting ensure-sysext.service...
Jan 16 09:06:39.594291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 09:06:39.619988 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Jan 16 09:06:39.620023 systemd[1]: Reloading...
Jan 16 09:06:39.741862 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 16 09:06:39.742552 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 16 09:06:39.749669 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 16 09:06:39.751952 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jan 16 09:06:39.752066 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jan 16 09:06:39.773370 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 09:06:39.773392 systemd-tmpfiles[1248]: Skipping /boot
Jan 16 09:06:39.819648 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 09:06:39.819682 systemd-tmpfiles[1248]: Skipping /boot
Jan 16 09:06:39.908551 zram_generator::config[1278]: No configuration found.
Jan 16 09:06:40.161628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 09:06:40.249640 systemd[1]: Reloading finished in 628 ms.
Jan 16 09:06:40.272942 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 09:06:40.274501 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 09:06:40.299339 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 16 09:06:40.314212 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 16 09:06:40.321090 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 16 09:06:40.336398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 09:06:40.343299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 09:06:40.348097 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 16 09:06:40.365245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:40.365715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:06:40.376387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 09:06:40.389616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 09:06:40.402903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 09:06:40.403904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:06:40.404179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:40.413389 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:40.413765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:06:40.414293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:06:40.414472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:40.424460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:40.424906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:06:40.436327 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 09:06:40.437410 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:06:40.454313 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 09:06:40.455910 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:40.462145 systemd[1]: Finished ensure-sysext.service.
Jan 16 09:06:40.466064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 09:06:40.466419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 09:06:40.467574 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 16 09:06:40.469686 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 09:06:40.472020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 09:06:40.480676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 09:06:40.481973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 09:06:40.483512 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 09:06:40.484363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 09:06:40.504131 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 09:06:40.504463 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 09:06:40.514302 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 16 09:06:40.514929 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 09:06:40.529019 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 16 09:06:40.538516 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 16 09:06:40.555170 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 16 09:06:40.603604 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 16 09:06:40.608258 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 16 09:06:40.609333 augenrules[1357]: No rules
Jan 16 09:06:40.616433 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jan 16 09:06:40.617479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 16 09:06:40.708420 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 09:06:40.721105 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 09:06:40.723040 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 16 09:06:40.724357 systemd[1]: Reached target time-set.target - System Time Set.
Jan 16 09:06:40.821168 systemd-resolved[1324]: Positive Trust Anchors:
Jan 16 09:06:40.821192 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 09:06:40.821253 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 09:06:40.866435 systemd-resolved[1324]: Using system hostname 'ci-4081.3.0-f-8515dcac45'.
Jan 16 09:06:40.873012 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 09:06:40.873970 systemd-networkd[1373]: lo: Link UP
Jan 16 09:06:40.873983 systemd-networkd[1373]: lo: Gained carrier
Jan 16 09:06:40.875139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 09:06:40.876649 systemd-networkd[1373]: Enumeration completed
Jan 16 09:06:40.876868 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 09:06:40.877521 systemd[1]: Reached target network.target - Network.
Jan 16 09:06:40.887186 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 16 09:06:40.961098 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 16 09:06:41.015498 systemd-networkd[1373]: eth0: Configuring with /run/systemd/network/10-b2:54:e2:2d:85:a9.network.
Jan 16 09:06:41.016758 systemd-networkd[1373]: eth0: Link UP
Jan 16 09:06:41.016769 systemd-networkd[1373]: eth0: Gained carrier
Jan 16 09:06:41.021183 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 16 09:06:41.021754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:41.022067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:06:41.032742 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 09:06:41.034963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 09:06:41.039719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 09:06:41.049491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 09:06:41.051120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:06:41.051207 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 09:06:41.051239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:06:41.061949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1374)
Jan 16 09:06:41.084920 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 16 09:06:41.089463 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 16 09:06:41.092009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 09:06:41.094018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 09:06:41.100903 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 16 09:06:41.107894 kernel: ACPI: button: Power Button [PWRF]
Jan 16 09:06:41.138549 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 09:06:41.141161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 09:06:41.144669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 09:06:41.153295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 09:06:41.154064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 09:06:41.166449 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 09:06:41.195212 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 16 09:06:41.250885 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 16 09:06:41.346617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:06:41.361864 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 16 09:06:41.368434 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 16 09:06:41.371919 kernel: Console: switching to colour dummy device 80x25
Jan 16 09:06:41.373012 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 16 09:06:41.373138 kernel: [drm] features: -context_init
Jan 16 09:06:41.376893 kernel: [drm] number of scanouts: 1
Jan 16 09:06:41.377107 kernel: [drm] number of cap sets: 0
Jan 16 09:06:41.380873 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 16 09:06:41.393523 systemd-networkd[1373]: eth1: Configuring with /run/systemd/network/10-ce:5e:d9:41:76:bd.network.
Jan 16 09:06:41.394866 kernel: mousedev: PS/2 mouse device common for all mice
Jan 16 09:06:41.395569 systemd-networkd[1373]: eth1: Link UP
Jan 16 09:06:41.395708 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 09:06:41.395887 systemd-networkd[1373]: eth1: Gained carrier
Jan 16 09:06:41.402423 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 09:06:41.406341 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 09:06:41.410997 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 16 09:06:41.411122 kernel: Console: switching to colour frame buffer device 128x48
Jan 16 09:06:41.444507 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 16 09:06:41.480497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 09:06:41.480900 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:06:41.499449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:06:41.511949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 09:06:41.533431 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 16 09:06:41.545493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 09:06:41.546279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:06:41.555278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:06:41.622643 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 16 09:06:41.709653 kernel: EDAC MC: Ver: 3.0.0
Jan 16 09:06:41.725349 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:06:41.741724 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 16 09:06:41.755505 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 16 09:06:41.779703 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 09:06:41.822089 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 16 09:06:41.822657 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 09:06:41.822871 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 09:06:41.823171 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 16 09:06:41.823371 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 16 09:06:41.823793 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 16 09:06:41.825329 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 16 09:06:41.825494 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 16 09:06:41.825579 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 16 09:06:41.825629 systemd[1]: Reached target paths.target - Path Units.
Jan 16 09:06:41.825699 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 09:06:41.829958 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 16 09:06:41.834775 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 16 09:06:41.849018 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 16 09:06:41.867201 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 16 09:06:41.879623 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 16 09:06:41.881379 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 09:06:41.882248 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 09:06:41.883215 systemd[1]: Reached target basic.target - Basic System.
Jan 16 09:06:41.884216 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 16 09:06:41.884272 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 16 09:06:41.899261 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 16 09:06:41.917387 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 16 09:06:41.928255 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 16 09:06:41.941152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 16 09:06:41.962209 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 16 09:06:41.968187 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 16 09:06:41.975273 jq[1438]: false
Jan 16 09:06:41.980265 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 16 09:06:41.996228 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 16 09:06:42.009605 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 16 09:06:42.037334 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 16 09:06:42.043963 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 16 09:06:42.046469 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 16 09:06:42.051827 coreos-metadata[1436]: Jan 16 09:06:42.050 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 09:06:42.069461 coreos-metadata[1436]: Jan 16 09:06:42.066 INFO Fetch successful
Jan 16 09:06:42.070423 systemd[1]: Starting update-engine.service - Update Engine...
Jan 16 09:06:42.081380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 16 09:06:42.090170 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 16 09:06:42.114306 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 16 09:06:42.116226 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 16 09:06:42.146322 extend-filesystems[1441]: Found loop4
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found loop5
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found loop6
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found loop7
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found vda
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found vda1
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found vda2
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found vda3
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found usr
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found vda4
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found vda6
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found vda7
Jan 16 09:06:42.163923 extend-filesystems[1441]: Found vda9
Jan 16 09:06:42.163923 extend-filesystems[1441]: Checking size of /dev/vda9
Jan 16 09:06:42.196175 dbus-daemon[1437]: [system] SELinux support is enabled
Jan 16 09:06:42.166768 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 16 09:06:42.168667 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 16 09:06:42.202465 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 16 09:06:42.258106 systemd[1]: motdgen.service: Deactivated successfully.
Jan 16 09:06:42.258600 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 16 09:06:42.278889 jq[1448]: true
Jan 16 09:06:42.282982 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 16 09:06:42.295491 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 16 09:06:42.295787 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 16 09:06:42.295921 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 16 09:06:42.307123 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 16 09:06:42.307348 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 16 09:06:42.307391 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 16 09:06:42.328857 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 16 09:06:42.344591 update_engine[1447]: I20250116 09:06:42.343892 1447 main.cc:92] Flatcar Update Engine starting
Jan 16 09:06:42.348284 extend-filesystems[1441]: Resized partition /dev/vda9
Jan 16 09:06:42.358484 systemd[1]: Started update-engine.service - Update Engine.
Jan 16 09:06:42.391793 update_engine[1447]: I20250116 09:06:42.360077 1447 update_check_scheduler.cc:74] Next update check in 10m42s
Jan 16 09:06:42.394263 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 16 09:06:42.407747 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024)
Jan 16 09:06:42.409154 jq[1471]: true
Jan 16 09:06:42.427213 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 16 09:06:42.474883 systemd-networkd[1373]: eth1: Gained IPv6LL
Jan 16 09:06:42.475597 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 09:06:42.484834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 16 09:06:42.497145 systemd[1]: Reached target network-online.target - Network is Online.
Jan 16 09:06:42.500853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381)
Jan 16 09:06:42.524270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 09:06:42.549710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 16 09:06:42.773575 systemd-logind[1446]: New seat seat0.
Jan 16 09:06:42.799913 bash[1500]: Updated "/home/core/.ssh/authorized_keys"
Jan 16 09:06:42.802566 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 16 09:06:42.802598 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 16 09:06:42.804888 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 16 09:06:42.828423 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 16 09:06:42.827322 systemd[1]: Starting sshkeys.service...
Jan 16 09:06:42.835262 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 16 09:06:42.894900 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 16 09:06:42.896727 systemd-networkd[1373]: eth0: Gained IPv6LL
Jan 16 09:06:42.912055 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 09:06:42.916557 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 16 09:06:42.927122 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 16 09:06:42.927122 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 16 09:06:42.927122 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 16 09:06:42.952519 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Jan 16 09:06:42.952519 extend-filesystems[1441]: Found vdb
Jan 16 09:06:42.928645 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 16 09:06:42.930197 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 16 09:06:42.966143 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 16 09:06:43.043709 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 16 09:06:43.120751 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 16 09:06:43.143011 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 16 09:06:43.153184 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 16 09:06:43.199516 coreos-metadata[1507]: Jan 16 09:06:43.199 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 09:06:43.215337 systemd[1]: issuegen.service: Deactivated successfully.
Jan 16 09:06:43.215656 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 16 09:06:43.228179 coreos-metadata[1507]: Jan 16 09:06:43.227 INFO Fetch successful
Jan 16 09:06:43.239408 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 16 09:06:43.269878 unknown[1507]: wrote ssh authorized keys file for user: core
Jan 16 09:06:43.314129 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 16 09:06:43.344723 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 16 09:06:43.365553 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 16 09:06:43.369581 systemd[1]: Reached target getty.target - Login Prompts.
Jan 16 09:06:43.403894 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys"
Jan 16 09:06:43.409589 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 16 09:06:43.419952 systemd[1]: Finished sshkeys.service.
Jan 16 09:06:43.479514 containerd[1461]: time="2025-01-16T09:06:43.479069214Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 16 09:06:43.530599 containerd[1461]: time="2025-01-16T09:06:43.530237015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 16 09:06:43.534894 containerd[1461]: time="2025-01-16T09:06:43.534180866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:06:43.534894 containerd[1461]: time="2025-01-16T09:06:43.534261022Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 16 09:06:43.534894 containerd[1461]: time="2025-01-16T09:06:43.534291960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 16 09:06:43.534894 containerd[1461]: time="2025-01-16T09:06:43.534579043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 16 09:06:43.534894 containerd[1461]: time="2025-01-16T09:06:43.534618204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 16 09:06:43.534894 containerd[1461]: time="2025-01-16T09:06:43.534733210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:06:43.534894 containerd[1461]: time="2025-01-16T09:06:43.534760778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 16 09:06:43.536532 containerd[1461]: time="2025-01-16T09:06:43.535769026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:06:43.536532 containerd[1461]: time="2025-01-16T09:06:43.535840065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 16 09:06:43.536532 containerd[1461]: time="2025-01-16T09:06:43.535864544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:06:43.536532 containerd[1461]: time="2025-01-16T09:06:43.535881720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 16 09:06:43.536532 containerd[1461]: time="2025-01-16T09:06:43.536069037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 16 09:06:43.536532 containerd[1461]: time="2025-01-16T09:06:43.536447473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 16 09:06:43.537259 containerd[1461]: time="2025-01-16T09:06:43.537217206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:06:43.537388 containerd[1461]: time="2025-01-16T09:06:43.537372626Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 16 09:06:43.538214 containerd[1461]: time="2025-01-16T09:06:43.537624672Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 16 09:06:43.538214 containerd[1461]: time="2025-01-16T09:06:43.537696971Z" level=info msg="metadata content store policy set" policy=shared
Jan 16 09:06:43.541702 containerd[1461]: time="2025-01-16T09:06:43.541633058Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 16 09:06:43.542105 containerd[1461]: time="2025-01-16T09:06:43.542075739Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 16 09:06:43.542273 containerd[1461]: time="2025-01-16T09:06:43.542254992Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 16 09:06:43.542428 containerd[1461]: time="2025-01-16T09:06:43.542406351Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 16 09:06:43.542849 containerd[1461]: time="2025-01-16T09:06:43.542497006Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 16 09:06:43.542849 containerd[1461]: time="2025-01-16T09:06:43.542767762Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 16 09:06:43.543506 containerd[1461]: time="2025-01-16T09:06:43.543473816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 16 09:06:43.543790 containerd[1461]: time="2025-01-16T09:06:43.543765694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.543934639Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.543964650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.543991246Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544016648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544040022Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544064428Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544088942Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544111356Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544132713Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544154185Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544192142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544215915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544239688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.544855 containerd[1461]: time="2025-01-16T09:06:43.544272408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544293787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544315628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544338931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544361402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544383139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544406916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544427133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544464313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544489586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544526233Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544566939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544585473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.545413 containerd[1461]: time="2025-01-16T09:06:43.544602923Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 16 09:06:43.547858 containerd[1461]: time="2025-01-16T09:06:43.546319746Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 16 09:06:43.547858 containerd[1461]: time="2025-01-16T09:06:43.546509483Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 16 09:06:43.547858 containerd[1461]: time="2025-01-16T09:06:43.546531071Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 16 09:06:43.547858 containerd[1461]: time="2025-01-16T09:06:43.546549684Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 16 09:06:43.547858 containerd[1461]: time="2025-01-16T09:06:43.546565099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.547858 containerd[1461]: time="2025-01-16T09:06:43.546590076Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 16 09:06:43.547858 containerd[1461]: time="2025-01-16T09:06:43.546606412Z" level=info msg="NRI interface is disabled by configuration."
Jan 16 09:06:43.547858 containerd[1461]: time="2025-01-16T09:06:43.546620587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 16 09:06:43.548340 containerd[1461]: time="2025-01-16T09:06:43.547120417Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[]
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 09:06:43.548340 containerd[1461]: time="2025-01-16T09:06:43.547216108Z" level=info msg="Connect containerd service" Jan 16 09:06:43.548340 containerd[1461]: time="2025-01-16T09:06:43.547310051Z" level=info msg="using legacy CRI server" Jan 16 09:06:43.548340 containerd[1461]: time="2025-01-16T09:06:43.547324968Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 09:06:43.548340 containerd[1461]: time="2025-01-16T09:06:43.547527918Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 09:06:43.554573 containerd[1461]: time="2025-01-16T09:06:43.554268045Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 16 09:06:43.555862 containerd[1461]: time="2025-01-16T09:06:43.555354627Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 09:06:43.555862 containerd[1461]: time="2025-01-16T09:06:43.555469181Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 09:06:43.555862 containerd[1461]: time="2025-01-16T09:06:43.555610148Z" level=info msg="Start subscribing containerd event" Jan 16 09:06:43.555862 containerd[1461]: time="2025-01-16T09:06:43.555707065Z" level=info msg="Start recovering state" Jan 16 09:06:43.556035 containerd[1461]: time="2025-01-16T09:06:43.555921924Z" level=info msg="Start event monitor" Jan 16 09:06:43.556035 containerd[1461]: time="2025-01-16T09:06:43.555957206Z" level=info msg="Start snapshots syncer" Jan 16 09:06:43.556035 containerd[1461]: time="2025-01-16T09:06:43.555973342Z" level=info msg="Start cni network conf syncer for default" Jan 16 09:06:43.556035 containerd[1461]: time="2025-01-16T09:06:43.555989025Z" level=info msg="Start streaming server" Jan 16 09:06:43.556319 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 09:06:43.559706 containerd[1461]: time="2025-01-16T09:06:43.559303246Z" level=info msg="containerd successfully booted in 0.085549s" Jan 16 09:06:44.729171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:06:44.743068 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:06:44.743543 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 09:06:44.748439 systemd[1]: Startup finished in 1.482s (kernel) + 5.903s (initrd) + 8.148s (userspace) = 15.533s. 
Jan 16 09:06:45.861855 kubelet[1553]: E0116 09:06:45.861743 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:06:45.865728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:06:45.866929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 09:06:45.867957 systemd[1]: kubelet.service: Consumed 1.551s CPU time. Jan 16 09:06:51.353186 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 09:06:51.364380 systemd[1]: Started sshd@0-143.110.238.88:22-139.178.68.195:52736.service - OpenSSH per-connection server daemon (139.178.68.195:52736). Jan 16 09:06:51.472249 sshd[1566]: Accepted publickey for core from 139.178.68.195 port 52736 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:51.478502 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:51.496938 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 09:06:51.522349 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 09:06:51.525621 systemd-logind[1446]: New session 1 of user core. Jan 16 09:06:51.553650 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 09:06:51.563431 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 09:06:51.584608 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 09:06:51.807973 systemd[1570]: Queued start job for default target default.target. Jan 16 09:06:51.820878 systemd[1570]: Created slice app.slice - User Application Slice. 
Jan 16 09:06:51.820945 systemd[1570]: Reached target paths.target - Paths. Jan 16 09:06:51.820971 systemd[1570]: Reached target timers.target - Timers. Jan 16 09:06:51.825223 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 09:06:51.854729 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 09:06:51.855028 systemd[1570]: Reached target sockets.target - Sockets. Jan 16 09:06:51.855058 systemd[1570]: Reached target basic.target - Basic System. Jan 16 09:06:51.855154 systemd[1570]: Reached target default.target - Main User Target. Jan 16 09:06:51.855211 systemd[1570]: Startup finished in 259ms. Jan 16 09:06:51.855999 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 09:06:51.869176 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 09:06:51.964725 systemd[1]: Started sshd@1-143.110.238.88:22-139.178.68.195:52746.service - OpenSSH per-connection server daemon (139.178.68.195:52746). Jan 16 09:06:52.052784 sshd[1581]: Accepted publickey for core from 139.178.68.195 port 52746 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:52.055604 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:52.065181 systemd-logind[1446]: New session 2 of user core. Jan 16 09:06:52.072200 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 09:06:52.150538 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:52.161745 systemd[1]: sshd@1-143.110.238.88:22-139.178.68.195:52746.service: Deactivated successfully. Jan 16 09:06:52.164229 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 09:06:52.167208 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Jan 16 09:06:52.174401 systemd[1]: Started sshd@2-143.110.238.88:22-139.178.68.195:52752.service - OpenSSH per-connection server daemon (139.178.68.195:52752). 
Jan 16 09:06:52.176578 systemd-logind[1446]: Removed session 2. Jan 16 09:06:52.235845 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 52752 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:52.238512 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:52.247171 systemd-logind[1446]: New session 3 of user core. Jan 16 09:06:52.257329 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 09:06:52.326314 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:52.349898 systemd[1]: sshd@2-143.110.238.88:22-139.178.68.195:52752.service: Deactivated successfully. Jan 16 09:06:52.353203 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 09:06:52.356922 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Jan 16 09:06:52.363046 systemd[1]: Started sshd@3-143.110.238.88:22-139.178.68.195:52754.service - OpenSSH per-connection server daemon (139.178.68.195:52754). Jan 16 09:06:52.365211 systemd-logind[1446]: Removed session 3. Jan 16 09:06:52.428180 sshd[1595]: Accepted publickey for core from 139.178.68.195 port 52754 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:52.430722 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:52.440917 systemd-logind[1446]: New session 4 of user core. Jan 16 09:06:52.451225 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 09:06:52.534050 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:52.551252 systemd[1]: sshd@3-143.110.238.88:22-139.178.68.195:52754.service: Deactivated successfully. Jan 16 09:06:52.554086 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 09:06:52.558114 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. 
Jan 16 09:06:52.564456 systemd[1]: Started sshd@4-143.110.238.88:22-139.178.68.195:52764.service - OpenSSH per-connection server daemon (139.178.68.195:52764). Jan 16 09:06:52.570619 systemd-logind[1446]: Removed session 4. Jan 16 09:06:52.639134 sshd[1602]: Accepted publickey for core from 139.178.68.195 port 52764 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:52.641563 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:52.654384 systemd-logind[1446]: New session 5 of user core. Jan 16 09:06:52.661161 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 09:06:52.749020 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 09:06:52.749848 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:06:52.765278 sudo[1605]: pam_unix(sudo:session): session closed for user root Jan 16 09:06:52.769724 sshd[1602]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:52.782715 systemd[1]: sshd@4-143.110.238.88:22-139.178.68.195:52764.service: Deactivated successfully. Jan 16 09:06:52.785774 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 09:06:52.792381 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Jan 16 09:06:52.804342 systemd[1]: Started sshd@5-143.110.238.88:22-139.178.68.195:52778.service - OpenSSH per-connection server daemon (139.178.68.195:52778). Jan 16 09:06:52.807903 systemd-logind[1446]: Removed session 5. Jan 16 09:06:52.873719 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 52778 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:52.876862 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:52.890735 systemd-logind[1446]: New session 6 of user core. Jan 16 09:06:52.897199 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 16 09:06:52.968255 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 09:06:52.969294 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:06:52.976430 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 16 09:06:52.987118 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 09:06:52.987513 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:06:53.028380 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 16 09:06:53.032849 auditctl[1617]: No rules Jan 16 09:06:53.034424 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 09:06:53.034763 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 09:06:53.041502 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 09:06:53.124859 augenrules[1635]: No rules Jan 16 09:06:53.126881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 09:06:53.131734 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 16 09:06:53.138338 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:53.153628 systemd[1]: sshd@5-143.110.238.88:22-139.178.68.195:52778.service: Deactivated successfully. Jan 16 09:06:53.158191 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 09:06:53.163909 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Jan 16 09:06:53.170971 systemd[1]: Started sshd@6-143.110.238.88:22-139.178.68.195:52790.service - OpenSSH per-connection server daemon (139.178.68.195:52790). Jan 16 09:06:53.172933 systemd-logind[1446]: Removed session 6. 
Jan 16 09:06:53.236230 sshd[1643]: Accepted publickey for core from 139.178.68.195 port 52790 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:53.240965 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:53.250419 systemd-logind[1446]: New session 7 of user core. Jan 16 09:06:53.270181 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 09:06:53.346349 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 09:06:53.346931 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:06:54.478686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:06:54.479474 systemd[1]: kubelet.service: Consumed 1.551s CPU time. Jan 16 09:06:54.490217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:06:54.541442 systemd[1]: Reloading requested from client PID 1678 ('systemctl') (unit session-7.scope)... Jan 16 09:06:54.541470 systemd[1]: Reloading... Jan 16 09:06:54.730932 zram_generator::config[1716]: No configuration found. Jan 16 09:06:54.916513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:06:55.026652 systemd[1]: Reloading finished in 484 ms. Jan 16 09:06:55.096620 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 09:06:55.096743 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 09:06:55.097136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:06:55.105481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:06:55.289112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 09:06:55.300506 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 09:06:55.373963 kubelet[1770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:06:55.373963 kubelet[1770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 09:06:55.373963 kubelet[1770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:06:55.374481 kubelet[1770]: I0116 09:06:55.373961 1770 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 09:06:56.059109 kubelet[1770]: I0116 09:06:56.059032 1770 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 16 09:06:56.059109 kubelet[1770]: I0116 09:06:56.059089 1770 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 09:06:56.059540 kubelet[1770]: I0116 09:06:56.059504 1770 server.go:929] "Client rotation is on, will bootstrap in background" Jan 16 09:06:56.093739 kubelet[1770]: I0116 09:06:56.093672 1770 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 09:06:56.112305 kubelet[1770]: E0116 09:06:56.112244 1770 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 09:06:56.112305 kubelet[1770]: I0116 09:06:56.112289 1770 server.go:1403] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 09:06:56.119190 kubelet[1770]: I0116 09:06:56.118651 1770 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 09:06:56.120199 kubelet[1770]: I0116 09:06:56.120134 1770 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 16 09:06:56.120510 kubelet[1770]: I0116 09:06:56.120420 1770 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 09:06:56.120895 kubelet[1770]: I0116 09:06:56.120478 1770 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"143.110.238.88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy"
:"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 09:06:56.120895 kubelet[1770]: I0116 09:06:56.120815 1770 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 09:06:56.120895 kubelet[1770]: I0116 09:06:56.120827 1770 container_manager_linux.go:300] "Creating device plugin manager" Jan 16 09:06:56.121209 kubelet[1770]: I0116 09:06:56.120982 1770 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:06:56.124575 kubelet[1770]: I0116 09:06:56.124112 1770 kubelet.go:408] "Attempting to sync node with API server" Jan 16 09:06:56.124575 kubelet[1770]: I0116 09:06:56.124176 1770 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 09:06:56.124575 kubelet[1770]: I0116 09:06:56.124252 1770 kubelet.go:314] "Adding apiserver pod source" Jan 16 09:06:56.124575 kubelet[1770]: I0116 09:06:56.124279 1770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 09:06:56.128064 kubelet[1770]: E0116 09:06:56.128015 1770 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:06:56.128064 kubelet[1770]: E0116 09:06:56.128075 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:06:56.131446 kubelet[1770]: I0116 09:06:56.131280 1770 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 09:06:56.133642 kubelet[1770]: I0116 09:06:56.133603 1770 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 09:06:56.136013 
kubelet[1770]: W0116 09:06:56.135966 1770 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 09:06:56.137186 kubelet[1770]: I0116 09:06:56.136987 1770 server.go:1269] "Started kubelet" Jan 16 09:06:56.139064 kubelet[1770]: I0116 09:06:56.138278 1770 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 09:06:56.140477 kubelet[1770]: I0116 09:06:56.139936 1770 server.go:460] "Adding debug handlers to kubelet server" Jan 16 09:06:56.144301 kubelet[1770]: I0116 09:06:56.143535 1770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 09:06:56.144301 kubelet[1770]: I0116 09:06:56.143602 1770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 09:06:56.144301 kubelet[1770]: I0116 09:06:56.143855 1770 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 09:06:56.148375 kubelet[1770]: I0116 09:06:56.148336 1770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 09:06:56.153025 kubelet[1770]: I0116 09:06:56.152990 1770 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 16 09:06:56.154384 kubelet[1770]: I0116 09:06:56.153141 1770 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 16 09:06:56.154791 kubelet[1770]: E0116 09:06:56.153381 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:56.154791 kubelet[1770]: I0116 09:06:56.154653 1770 reconciler.go:26] "Reconciler: start to sync state" Jan 16 09:06:56.157373 kubelet[1770]: I0116 09:06:56.156226 1770 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: 
no such file or directory Jan 16 09:06:56.158985 kubelet[1770]: E0116 09:06:56.158541 1770 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 09:06:56.167631 kubelet[1770]: I0116 09:06:56.167282 1770 factory.go:221] Registration of the containerd container factory successfully Jan 16 09:06:56.167631 kubelet[1770]: I0116 09:06:56.167313 1770 factory.go:221] Registration of the systemd container factory successfully Jan 16 09:06:56.198429 kubelet[1770]: W0116 09:06:56.196208 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 16 09:06:56.198429 kubelet[1770]: E0116 09:06:56.196337 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 16 09:06:56.205087 kubelet[1770]: E0116 09:06:56.196450 1770 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.110.238.88.181b210bde2791e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.110.238.88,UID:143.110.238.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:143.110.238.88,},FirstTimestamp:2025-01-16 09:06:56.136950249 +0000 UTC m=+0.828045618,LastTimestamp:2025-01-16 09:06:56.136950249 +0000 UTC 
m=+0.828045618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.110.238.88,}" Jan 16 09:06:56.205087 kubelet[1770]: W0116 09:06:56.201425 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 16 09:06:56.205087 kubelet[1770]: E0116 09:06:56.201517 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 16 09:06:56.205087 kubelet[1770]: W0116 09:06:56.201621 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "143.110.238.88" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 16 09:06:56.205087 kubelet[1770]: E0116 09:06:56.201643 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"143.110.238.88\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 16 09:06:56.205435 kubelet[1770]: E0116 09:06:56.201975 1770 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"143.110.238.88\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 16 09:06:56.210532 kubelet[1770]: I0116 09:06:56.210130 1770 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 09:06:56.210532 kubelet[1770]: I0116 09:06:56.210167 1770 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Jan 16 09:06:56.210532 kubelet[1770]: I0116 09:06:56.210196 1770 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:06:56.217145 kubelet[1770]: I0116 09:06:56.217109 1770 policy_none.go:49] "None policy: Start" Jan 16 09:06:56.219768 kubelet[1770]: I0116 09:06:56.219735 1770 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 09:06:56.220159 kubelet[1770]: I0116 09:06:56.219974 1770 state_mem.go:35] "Initializing new in-memory state store" Jan 16 09:06:56.227430 kubelet[1770]: E0116 09:06:56.227291 1770 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.110.238.88.181b210bdf70a9be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.110.238.88,UID:143.110.238.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:143.110.238.88,},FirstTimestamp:2025-01-16 09:06:56.158517694 +0000 UTC m=+0.849613065,LastTimestamp:2025-01-16 09:06:56.158517694 +0000 UTC m=+0.849613065,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.110.238.88,}" Jan 16 09:06:56.234482 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 09:06:56.253918 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 16 09:06:56.255118 kubelet[1770]: E0116 09:06:56.255030 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:56.261342 kubelet[1770]: E0116 09:06:56.261185 1770 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.110.238.88.181b210be248020b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.110.238.88,UID:143.110.238.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 143.110.238.88 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:143.110.238.88,},FirstTimestamp:2025-01-16 09:06:56.206184971 +0000 UTC m=+0.897280332,LastTimestamp:2025-01-16 09:06:56.206184971 +0000 UTC m=+0.897280332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.110.238.88,}" Jan 16 09:06:56.268365 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 16 09:06:56.279969 kubelet[1770]: I0116 09:06:56.279316 1770 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 09:06:56.279969 kubelet[1770]: I0116 09:06:56.279615 1770 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 09:06:56.279969 kubelet[1770]: I0116 09:06:56.279634 1770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 09:06:56.281595 kubelet[1770]: I0116 09:06:56.281438 1770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 09:06:56.288851 kubelet[1770]: E0116 09:06:56.288609 1770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"143.110.238.88\" not found" Jan 16 09:06:56.291954 kubelet[1770]: I0116 09:06:56.291500 1770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 09:06:56.296704 kubelet[1770]: I0116 09:06:56.296662 1770 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 16 09:06:56.296957 kubelet[1770]: I0116 09:06:56.296912 1770 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 09:06:56.297131 kubelet[1770]: I0116 09:06:56.297121 1770 kubelet.go:2321] "Starting kubelet main sync loop" Jan 16 09:06:56.297325 kubelet[1770]: E0116 09:06:56.297291 1770 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 16 09:06:56.382071 kubelet[1770]: I0116 09:06:56.381574 1770 kubelet_node_status.go:72] "Attempting to register node" node="143.110.238.88" Jan 16 09:06:56.414353 kubelet[1770]: I0116 09:06:56.414151 1770 kubelet_node_status.go:75] "Successfully registered node" node="143.110.238.88" Jan 16 09:06:56.414353 kubelet[1770]: E0116 09:06:56.414197 1770 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"143.110.238.88\": node \"143.110.238.88\" not found" Jan 16 09:06:56.463844 kubelet[1770]: E0116 09:06:56.463767 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:56.563994 kubelet[1770]: E0116 09:06:56.563925 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:56.664727 kubelet[1770]: E0116 09:06:56.664569 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:56.765520 kubelet[1770]: E0116 09:06:56.765452 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:56.866576 kubelet[1770]: E0116 09:06:56.866503 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:56.915276 sudo[1646]: pam_unix(sudo:session): session closed for user root Jan 16 09:06:56.919086 sshd[1643]: pam_unix(sshd:session): session closed for user 
core Jan 16 09:06:56.925178 systemd[1]: sshd@6-143.110.238.88:22-139.178.68.195:52790.service: Deactivated successfully. Jan 16 09:06:56.929369 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 09:06:56.931163 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Jan 16 09:06:56.932708 systemd-logind[1446]: Removed session 7. Jan 16 09:06:56.967519 kubelet[1770]: E0116 09:06:56.967443 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:57.062116 kubelet[1770]: I0116 09:06:57.062034 1770 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 16 09:06:57.062508 kubelet[1770]: W0116 09:06:57.062331 1770 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 16 09:06:57.062508 kubelet[1770]: W0116 09:06:57.062462 1770 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 16 09:06:57.068386 kubelet[1770]: E0116 09:06:57.068308 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:57.128945 kubelet[1770]: E0116 09:06:57.128859 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:06:57.169502 kubelet[1770]: E0116 09:06:57.169359 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:57.270372 kubelet[1770]: E0116 09:06:57.270264 1770 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"143.110.238.88\" not found" Jan 16 09:06:57.371028 kubelet[1770]: E0116 09:06:57.370954 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:57.472158 kubelet[1770]: E0116 09:06:57.471975 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:57.573432 kubelet[1770]: E0116 09:06:57.573348 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:57.674012 kubelet[1770]: E0116 09:06:57.673933 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.110.238.88\" not found" Jan 16 09:06:57.775961 kubelet[1770]: I0116 09:06:57.775794 1770 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 16 09:06:57.776457 containerd[1461]: time="2025-01-16T09:06:57.776347176Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 16 09:06:57.776946 kubelet[1770]: I0116 09:06:57.776784 1770 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 16 09:06:58.130120 kubelet[1770]: I0116 09:06:58.129965 1770 apiserver.go:52] "Watching apiserver" Jan 16 09:06:58.130120 kubelet[1770]: E0116 09:06:58.130047 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:06:58.139598 kubelet[1770]: E0116 09:06:58.139408 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:06:58.152497 systemd[1]: Created slice kubepods-besteffort-podac2926c4_22a8_45fc_85af_19f66c353e5b.slice - libcontainer container kubepods-besteffort-podac2926c4_22a8_45fc_85af_19f66c353e5b.slice. 
Jan 16 09:06:58.158695 kubelet[1770]: I0116 09:06:58.158421 1770 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 16 09:06:58.168690 kubelet[1770]: I0116 09:06:58.166540 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4dcdac-771a-4ab3-83e6-3c460b024d83-tigera-ca-bundle\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.168690 kubelet[1770]: I0116 09:06:58.166592 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1efedbdc-2152-4ce4-a7be-f69fdc2dddc3-socket-dir\") pod \"csi-node-driver-2rk47\" (UID: \"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3\") " pod="calico-system/csi-node-driver-2rk47" Jan 16 09:06:58.168690 kubelet[1770]: I0116 09:06:58.166625 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfllc\" (UniqueName: \"kubernetes.io/projected/ac2926c4-22a8-45fc-85af-19f66c353e5b-kube-api-access-vfllc\") pod \"kube-proxy-8q6n9\" (UID: \"ac2926c4-22a8-45fc-85af-19f66c353e5b\") " pod="kube-system/kube-proxy-8q6n9" Jan 16 09:06:58.168690 kubelet[1770]: I0116 09:06:58.166651 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-flexvol-driver-host\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.168690 kubelet[1770]: I0116 09:06:58.166676 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac2926c4-22a8-45fc-85af-19f66c353e5b-kube-proxy\") pod 
\"kube-proxy-8q6n9\" (UID: \"ac2926c4-22a8-45fc-85af-19f66c353e5b\") " pod="kube-system/kube-proxy-8q6n9" Jan 16 09:06:58.168525 systemd[1]: Created slice kubepods-besteffort-podaa4dcdac_771a_4ab3_83e6_3c460b024d83.slice - libcontainer container kubepods-besteffort-podaa4dcdac_771a_4ab3_83e6_3c460b024d83.slice. Jan 16 09:06:58.169257 kubelet[1770]: I0116 09:06:58.166702 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac2926c4-22a8-45fc-85af-19f66c353e5b-xtables-lock\") pod \"kube-proxy-8q6n9\" (UID: \"ac2926c4-22a8-45fc-85af-19f66c353e5b\") " pod="kube-system/kube-proxy-8q6n9" Jan 16 09:06:58.169257 kubelet[1770]: I0116 09:06:58.166725 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac2926c4-22a8-45fc-85af-19f66c353e5b-lib-modules\") pod \"kube-proxy-8q6n9\" (UID: \"ac2926c4-22a8-45fc-85af-19f66c353e5b\") " pod="kube-system/kube-proxy-8q6n9" Jan 16 09:06:58.169257 kubelet[1770]: I0116 09:06:58.166792 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-lib-modules\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169257 kubelet[1770]: I0116 09:06:58.166864 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa4dcdac-771a-4ab3-83e6-3c460b024d83-node-certs\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169257 kubelet[1770]: I0116 09:06:58.166889 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-cni-bin-dir\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169429 kubelet[1770]: I0116 09:06:58.166913 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-cni-net-dir\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169429 kubelet[1770]: I0116 09:06:58.166964 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1efedbdc-2152-4ce4-a7be-f69fdc2dddc3-varrun\") pod \"csi-node-driver-2rk47\" (UID: \"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3\") " pod="calico-system/csi-node-driver-2rk47" Jan 16 09:06:58.169429 kubelet[1770]: I0116 09:06:58.166989 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1efedbdc-2152-4ce4-a7be-f69fdc2dddc3-registration-dir\") pod \"csi-node-driver-2rk47\" (UID: \"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3\") " pod="calico-system/csi-node-driver-2rk47" Jan 16 09:06:58.169429 kubelet[1770]: I0116 09:06:58.167014 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-xtables-lock\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169429 kubelet[1770]: I0116 09:06:58.167041 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-policysync\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169609 kubelet[1770]: I0116 09:06:58.167070 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-var-lib-calico\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169609 kubelet[1770]: I0116 09:06:58.167097 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhcfl\" (UniqueName: \"kubernetes.io/projected/aa4dcdac-771a-4ab3-83e6-3c460b024d83-kube-api-access-qhcfl\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169609 kubelet[1770]: I0116 09:06:58.167134 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-var-run-calico\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169609 kubelet[1770]: I0116 09:06:58.167190 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa4dcdac-771a-4ab3-83e6-3c460b024d83-cni-log-dir\") pod \"calico-node-9r2q2\" (UID: \"aa4dcdac-771a-4ab3-83e6-3c460b024d83\") " pod="calico-system/calico-node-9r2q2" Jan 16 09:06:58.169609 kubelet[1770]: I0116 09:06:58.167217 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1efedbdc-2152-4ce4-a7be-f69fdc2dddc3-kubelet-dir\") pod \"csi-node-driver-2rk47\" (UID: \"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3\") " pod="calico-system/csi-node-driver-2rk47" Jan 16 09:06:58.169777 kubelet[1770]: I0116 09:06:58.167239 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jwr\" (UniqueName: \"kubernetes.io/projected/1efedbdc-2152-4ce4-a7be-f69fdc2dddc3-kube-api-access-q6jwr\") pod \"csi-node-driver-2rk47\" (UID: \"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3\") " pod="calico-system/csi-node-driver-2rk47" Jan 16 09:06:58.280396 kubelet[1770]: E0116 09:06:58.280203 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:58.280396 kubelet[1770]: W0116 09:06:58.280247 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:58.280396 kubelet[1770]: E0116 09:06:58.280303 1770 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:58.369678 kubelet[1770]: E0116 09:06:58.369528 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:58.369678 kubelet[1770]: W0116 09:06:58.369568 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:58.369678 kubelet[1770]: E0116 09:06:58.369648 1770 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:58.370147 kubelet[1770]: E0116 09:06:58.370122 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:58.370147 kubelet[1770]: W0116 09:06:58.370140 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:58.370239 kubelet[1770]: E0116 09:06:58.370163 1770 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:58.371840 kubelet[1770]: E0116 09:06:58.370447 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:58.371840 kubelet[1770]: W0116 09:06:58.370466 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:58.371840 kubelet[1770]: E0116 09:06:58.370484 1770 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:58.387345 kubelet[1770]: E0116 09:06:58.384824 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:58.387345 kubelet[1770]: W0116 09:06:58.384860 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:58.387345 kubelet[1770]: E0116 09:06:58.384896 1770 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:58.390603 kubelet[1770]: E0116 09:06:58.390558 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:58.390837 kubelet[1770]: W0116 09:06:58.390817 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:58.390955 kubelet[1770]: E0116 09:06:58.390942 1770 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:58.396421 kubelet[1770]: E0116 09:06:58.396378 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:58.396774 kubelet[1770]: W0116 09:06:58.396749 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:58.396990 kubelet[1770]: E0116 09:06:58.396936 1770 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:58.464122 kubelet[1770]: E0116 09:06:58.464068 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:06:58.466567 containerd[1461]: time="2025-01-16T09:06:58.466480185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8q6n9,Uid:ac2926c4-22a8-45fc-85af-19f66c353e5b,Namespace:kube-system,Attempt:0,}" Jan 16 09:06:58.475902 kubelet[1770]: E0116 09:06:58.474586 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:06:58.476570 containerd[1461]: time="2025-01-16T09:06:58.475235738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9r2q2,Uid:aa4dcdac-771a-4ab3-83e6-3c460b024d83,Namespace:calico-system,Attempt:0,}" Jan 16 09:06:59.131670 kubelet[1770]: E0116 09:06:59.131573 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:06:59.173669 containerd[1461]: time="2025-01-16T09:06:59.173193539Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:06:59.177652 containerd[1461]: time="2025-01-16T09:06:59.176346019Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:06:59.178968 containerd[1461]: time="2025-01-16T09:06:59.178894905Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 09:06:59.179428 containerd[1461]: time="2025-01-16T09:06:59.179392778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 09:06:59.180318 containerd[1461]: time="2025-01-16T09:06:59.180258829Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:06:59.185926 containerd[1461]: time="2025-01-16T09:06:59.185857479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:06:59.188414 containerd[1461]: time="2025-01-16T09:06:59.188321472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 712.932051ms" Jan 16 09:06:59.192615 containerd[1461]: time="2025-01-16T09:06:59.192545570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 725.894838ms" Jan 16 09:06:59.287105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201025318.mount: Deactivated successfully. Jan 16 09:06:59.299054 kubelet[1770]: E0116 09:06:59.298285 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:06:59.414684 containerd[1461]: time="2025-01-16T09:06:59.414347631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:59.414684 containerd[1461]: time="2025-01-16T09:06:59.414412512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:59.414684 containerd[1461]: time="2025-01-16T09:06:59.414464341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:59.416078 containerd[1461]: time="2025-01-16T09:06:59.414573468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:59.431505 containerd[1461]: time="2025-01-16T09:06:59.431149593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:59.431505 containerd[1461]: time="2025-01-16T09:06:59.431236877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:59.431505 containerd[1461]: time="2025-01-16T09:06:59.431261179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:59.431505 containerd[1461]: time="2025-01-16T09:06:59.431356895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:59.565433 systemd[1]: Started cri-containerd-4b4865c550cbc7bc507c9db7f58d0cf07dba60c5bae1de14e23b9f9fe102e1ce.scope - libcontainer container 4b4865c550cbc7bc507c9db7f58d0cf07dba60c5bae1de14e23b9f9fe102e1ce. Jan 16 09:06:59.569446 systemd[1]: Started cri-containerd-cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e.scope - libcontainer container cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e. Jan 16 09:06:59.646881 containerd[1461]: time="2025-01-16T09:06:59.646688210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9r2q2,Uid:aa4dcdac-771a-4ab3-83e6-3c460b024d83,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e\"" Jan 16 09:06:59.649166 containerd[1461]: time="2025-01-16T09:06:59.648970316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8q6n9,Uid:ac2926c4-22a8-45fc-85af-19f66c353e5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b4865c550cbc7bc507c9db7f58d0cf07dba60c5bae1de14e23b9f9fe102e1ce\"" Jan 16 09:06:59.651834 kubelet[1770]: E0116 09:06:59.650720 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:06:59.651834 kubelet[1770]: E0116 09:06:59.651389 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:06:59.652976 containerd[1461]: time="2025-01-16T09:06:59.652716784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 16 09:07:00.132439 kubelet[1770]: E0116 09:07:00.132355 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:01.030525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056473022.mount: Deactivated successfully. Jan 16 09:07:01.134680 kubelet[1770]: E0116 09:07:01.133424 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:01.259702 containerd[1461]: time="2025-01-16T09:07:01.259599551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:01.261165 containerd[1461]: time="2025-01-16T09:07:01.261083887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 16 09:07:01.264843 containerd[1461]: time="2025-01-16T09:07:01.262833583Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:01.266418 containerd[1461]: time="2025-01-16T09:07:01.266344124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:01.267835 containerd[1461]: time="2025-01-16T09:07:01.267731522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.614869999s" Jan 16 09:07:01.267835 containerd[1461]: time="2025-01-16T09:07:01.267824925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 16 09:07:01.271768 containerd[1461]: time="2025-01-16T09:07:01.271708897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 16 09:07:01.273388 containerd[1461]: time="2025-01-16T09:07:01.273251569Z" level=info msg="CreateContainer within sandbox \"cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 09:07:01.297125 containerd[1461]: time="2025-01-16T09:07:01.296868770Z" level=info msg="CreateContainer within sandbox \"cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695\"" Jan 16 09:07:01.298152 kubelet[1770]: E0116 09:07:01.297598 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:07:01.298968 containerd[1461]: time="2025-01-16T09:07:01.298565066Z" level=info msg="StartContainer for \"fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695\"" Jan 16 09:07:01.361137 systemd[1]: Started cri-containerd-fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695.scope - libcontainer container 
fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695. Jan 16 09:07:01.412294 containerd[1461]: time="2025-01-16T09:07:01.412213973Z" level=info msg="StartContainer for \"fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695\" returns successfully" Jan 16 09:07:01.430151 systemd[1]: cri-containerd-fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695.scope: Deactivated successfully. Jan 16 09:07:01.492916 containerd[1461]: time="2025-01-16T09:07:01.492571714Z" level=info msg="shim disconnected" id=fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695 namespace=k8s.io Jan 16 09:07:01.492916 containerd[1461]: time="2025-01-16T09:07:01.492694950Z" level=warning msg="cleaning up after shim disconnected" id=fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695 namespace=k8s.io Jan 16 09:07:01.492916 containerd[1461]: time="2025-01-16T09:07:01.492712656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:07:01.957754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc77aeb0e038bf7f2a254e317a51b16d6b512c2cad23390eb8dbcebc418b4695-rootfs.mount: Deactivated successfully. Jan 16 09:07:02.134206 kubelet[1770]: E0116 09:07:02.134080 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:02.334700 kubelet[1770]: E0116 09:07:02.334139 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:02.908397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662178864.mount: Deactivated successfully. 
Jan 16 09:07:03.134961 kubelet[1770]: E0116 09:07:03.134620 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:03.298254 kubelet[1770]: E0116 09:07:03.298058 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:07:03.703045 containerd[1461]: time="2025-01-16T09:07:03.701957936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:03.704749 containerd[1461]: time="2025-01-16T09:07:03.704648533Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 16 09:07:03.706141 containerd[1461]: time="2025-01-16T09:07:03.706056768Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:03.709709 containerd[1461]: time="2025-01-16T09:07:03.709635141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:03.711214 containerd[1461]: time="2025-01-16T09:07:03.711145079Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.43937478s" Jan 16 09:07:03.711513 containerd[1461]: 
time="2025-01-16T09:07:03.711471107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 16 09:07:03.714248 containerd[1461]: time="2025-01-16T09:07:03.714191466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 16 09:07:03.717252 containerd[1461]: time="2025-01-16T09:07:03.717183209Z" level=info msg="CreateContainer within sandbox \"4b4865c550cbc7bc507c9db7f58d0cf07dba60c5bae1de14e23b9f9fe102e1ce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 09:07:03.737063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807112280.mount: Deactivated successfully. Jan 16 09:07:03.759455 containerd[1461]: time="2025-01-16T09:07:03.759340279Z" level=info msg="CreateContainer within sandbox \"4b4865c550cbc7bc507c9db7f58d0cf07dba60c5bae1de14e23b9f9fe102e1ce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8289b832d2622af48b56a02f7a16769262b801a2731d450f01d59acff65c7c01\"" Jan 16 09:07:03.764903 containerd[1461]: time="2025-01-16T09:07:03.764746251Z" level=info msg="StartContainer for \"8289b832d2622af48b56a02f7a16769262b801a2731d450f01d59acff65c7c01\"" Jan 16 09:07:03.826623 systemd[1]: Started cri-containerd-8289b832d2622af48b56a02f7a16769262b801a2731d450f01d59acff65c7c01.scope - libcontainer container 8289b832d2622af48b56a02f7a16769262b801a2731d450f01d59acff65c7c01. 
Jan 16 09:07:03.892839 containerd[1461]: time="2025-01-16T09:07:03.892728270Z" level=info msg="StartContainer for \"8289b832d2622af48b56a02f7a16769262b801a2731d450f01d59acff65c7c01\" returns successfully" Jan 16 09:07:04.135955 kubelet[1770]: E0116 09:07:04.135876 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:04.350325 kubelet[1770]: E0116 09:07:04.348945 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:04.613345 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 16 09:07:05.136502 kubelet[1770]: E0116 09:07:05.136428 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:05.298790 kubelet[1770]: E0116 09:07:05.298380 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:07:05.353578 kubelet[1770]: E0116 09:07:05.353095 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:06.138121 kubelet[1770]: E0116 09:07:06.138048 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:07.139550 kubelet[1770]: E0116 09:07:07.139477 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:07.298282 kubelet[1770]: E0116 09:07:07.297502 
1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:07:07.685574 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 16 09:07:08.141099 kubelet[1770]: E0116 09:07:08.141041 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:08.682586 containerd[1461]: time="2025-01-16T09:07:08.682498198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:08.684773 containerd[1461]: time="2025-01-16T09:07:08.684227300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 16 09:07:08.686334 containerd[1461]: time="2025-01-16T09:07:08.685577618Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:08.690349 containerd[1461]: time="2025-01-16T09:07:08.690261382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:08.692239 containerd[1461]: time="2025-01-16T09:07:08.692161803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 
4.977902248s" Jan 16 09:07:08.692500 containerd[1461]: time="2025-01-16T09:07:08.692464888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 16 09:07:08.697928 containerd[1461]: time="2025-01-16T09:07:08.697674135Z" level=info msg="CreateContainer within sandbox \"cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 09:07:08.743239 containerd[1461]: time="2025-01-16T09:07:08.743139773Z" level=info msg="CreateContainer within sandbox \"cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b\"" Jan 16 09:07:08.748342 containerd[1461]: time="2025-01-16T09:07:08.748274432Z" level=info msg="StartContainer for \"cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b\"" Jan 16 09:07:08.828290 systemd[1]: Started cri-containerd-cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b.scope - libcontainer container cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b. 
Jan 16 09:07:08.896273 containerd[1461]: time="2025-01-16T09:07:08.895680546Z" level=info msg="StartContainer for \"cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b\" returns successfully" Jan 16 09:07:09.142381 kubelet[1770]: E0116 09:07:09.142304 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:09.300915 kubelet[1770]: E0116 09:07:09.300458 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:07:09.375770 kubelet[1770]: E0116 09:07:09.375686 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:09.433582 kubelet[1770]: I0116 09:07:09.433320 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8q6n9" podStartSLOduration=8.37249235 podStartE2EDuration="12.433148099s" podCreationTimestamp="2025-01-16 09:06:57 +0000 UTC" firstStartedPulling="2025-01-16 09:06:59.652759231 +0000 UTC m=+4.343854593" lastFinishedPulling="2025-01-16 09:07:03.713414975 +0000 UTC m=+8.404510342" observedRunningTime="2025-01-16 09:07:04.420631107 +0000 UTC m=+9.111726477" watchObservedRunningTime="2025-01-16 09:07:09.433148099 +0000 UTC m=+14.124243550" Jan 16 09:07:09.925239 containerd[1461]: time="2025-01-16T09:07:09.916645398Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 09:07:09.925551 systemd[1]: 
cri-containerd-cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b.scope: Deactivated successfully. Jan 16 09:07:09.982867 kubelet[1770]: I0116 09:07:09.979152 1770 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 16 09:07:09.995318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b-rootfs.mount: Deactivated successfully. Jan 16 09:07:10.038406 containerd[1461]: time="2025-01-16T09:07:10.034048773Z" level=info msg="shim disconnected" id=cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b namespace=k8s.io Jan 16 09:07:10.038406 containerd[1461]: time="2025-01-16T09:07:10.034155674Z" level=warning msg="cleaning up after shim disconnected" id=cc9b815f179eda552a7873387b12e57911af4ff8bed73acf5d490da36016555b namespace=k8s.io Jan 16 09:07:10.038406 containerd[1461]: time="2025-01-16T09:07:10.034175528Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:07:10.143460 kubelet[1770]: E0116 09:07:10.143354 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:10.381181 kubelet[1770]: E0116 09:07:10.380913 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:10.382793 containerd[1461]: time="2025-01-16T09:07:10.382695721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 16 09:07:10.858335 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 16 09:07:11.143795 kubelet[1770]: E0116 09:07:11.143600 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:11.318231 systemd[1]: Created slice kubepods-besteffort-pod1efedbdc_2152_4ce4_a7be_f69fdc2dddc3.slice - libcontainer container kubepods-besteffort-pod1efedbdc_2152_4ce4_a7be_f69fdc2dddc3.slice. Jan 16 09:07:11.323688 containerd[1461]: time="2025-01-16T09:07:11.323621967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rk47,Uid:1efedbdc-2152-4ce4-a7be-f69fdc2dddc3,Namespace:calico-system,Attempt:0,}" Jan 16 09:07:11.478012 containerd[1461]: time="2025-01-16T09:07:11.475863281Z" level=error msg="Failed to destroy network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:07:11.478965 containerd[1461]: time="2025-01-16T09:07:11.478677169Z" level=error msg="encountered an error cleaning up failed sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:07:11.478965 containerd[1461]: time="2025-01-16T09:07:11.478865428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rk47,Uid:1efedbdc-2152-4ce4-a7be-f69fdc2dddc3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 16 09:07:11.480313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592-shm.mount: Deactivated successfully. Jan 16 09:07:11.482260 kubelet[1770]: E0116 09:07:11.480914 1770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:07:11.482260 kubelet[1770]: E0116 09:07:11.481108 1770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rk47" Jan 16 09:07:11.482260 kubelet[1770]: E0116 09:07:11.481148 1770 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rk47" Jan 16 09:07:11.482570 kubelet[1770]: E0116 09:07:11.481225 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rk47_calico-system(1efedbdc-2152-4ce4-a7be-f69fdc2dddc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-2rk47_calico-system(1efedbdc-2152-4ce4-a7be-f69fdc2dddc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:07:12.144637 kubelet[1770]: E0116 09:07:12.144570 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:12.391853 kubelet[1770]: I0116 09:07:12.389340 1770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:12.392069 containerd[1461]: time="2025-01-16T09:07:12.390730693Z" level=info msg="StopPodSandbox for \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\"" Jan 16 09:07:12.392069 containerd[1461]: time="2025-01-16T09:07:12.391280143Z" level=info msg="Ensure that sandbox 7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592 in task-service has been cleanup successfully" Jan 16 09:07:12.419986 systemd[1]: Created slice kubepods-besteffort-pod5fc1d419_3814_4bc4_8054_6dbd37255d77.slice - libcontainer container kubepods-besteffort-pod5fc1d419_3814_4bc4_8054_6dbd37255d77.slice. 
Jan 16 09:07:12.496777 kubelet[1770]: I0116 09:07:12.496237 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px46n\" (UniqueName: \"kubernetes.io/projected/5fc1d419-3814-4bc4-8054-6dbd37255d77-kube-api-access-px46n\") pod \"nginx-deployment-8587fbcb89-pd78d\" (UID: \"5fc1d419-3814-4bc4-8054-6dbd37255d77\") " pod="default/nginx-deployment-8587fbcb89-pd78d" Jan 16 09:07:12.500043 containerd[1461]: time="2025-01-16T09:07:12.499952244Z" level=error msg="StopPodSandbox for \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\" failed" error="failed to destroy network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:07:12.501276 kubelet[1770]: E0116 09:07:12.500949 1770 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:12.501276 kubelet[1770]: E0116 09:07:12.501051 1770 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592"} Jan 16 09:07:12.501276 kubelet[1770]: E0116 09:07:12.501136 1770 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:07:12.501276 kubelet[1770]: E0116 09:07:12.501174 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rk47" podUID="1efedbdc-2152-4ce4-a7be-f69fdc2dddc3" Jan 16 09:07:12.731931 containerd[1461]: time="2025-01-16T09:07:12.730357276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pd78d,Uid:5fc1d419-3814-4bc4-8054-6dbd37255d77,Namespace:default,Attempt:0,}" Jan 16 09:07:12.860874 containerd[1461]: time="2025-01-16T09:07:12.860762743Z" level=error msg="Failed to destroy network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:07:12.861415 containerd[1461]: time="2025-01-16T09:07:12.861364512Z" level=error msg="encountered an error cleaning up failed sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 
09:07:12.861527 containerd[1461]: time="2025-01-16T09:07:12.861494265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pd78d,Uid:5fc1d419-3814-4bc4-8054-6dbd37255d77,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:07:12.864734 kubelet[1770]: E0116 09:07:12.863989 1770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:07:12.864734 kubelet[1770]: E0116 09:07:12.864076 1770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pd78d" Jan 16 09:07:12.864734 kubelet[1770]: E0116 09:07:12.864228 1770 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pd78d" Jan 16 09:07:12.865354 
kubelet[1770]: E0116 09:07:12.864324 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pd78d_default(5fc1d419-3814-4bc4-8054-6dbd37255d77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pd78d_default(5fc1d419-3814-4bc4-8054-6dbd37255d77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pd78d" podUID="5fc1d419-3814-4bc4-8054-6dbd37255d77" Jan 16 09:07:12.866056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848-shm.mount: Deactivated successfully. Jan 16 09:07:14.130881 systemd-timesyncd[1346]: Contacted time server 23.131.160.7:123 (2.flatcar.pool.ntp.org). Jan 16 09:07:14.131013 systemd-timesyncd[1346]: Initial clock synchronization to Thu 2025-01-16 09:07:14.127750 UTC. Jan 16 09:07:14.132047 systemd-resolved[1324]: Clock change detected. Flushing caches. 
Jan 16 09:07:14.157272 kubelet[1770]: E0116 09:07:14.157215 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:14.405581 kubelet[1770]: I0116 09:07:14.405436 1770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:14.407502 containerd[1461]: time="2025-01-16T09:07:14.407215595Z" level=info msg="StopPodSandbox for \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\"" Jan 16 09:07:14.409631 containerd[1461]: time="2025-01-16T09:07:14.409462718Z" level=info msg="Ensure that sandbox 912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848 in task-service has been cleanup successfully" Jan 16 09:07:14.482665 containerd[1461]: time="2025-01-16T09:07:14.482287051Z" level=error msg="StopPodSandbox for \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\" failed" error="failed to destroy network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:07:14.484854 kubelet[1770]: E0116 09:07:14.483690 1770 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:14.485263 kubelet[1770]: E0116 09:07:14.485160 1770 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848"} Jan 16 09:07:14.485383 kubelet[1770]: E0116 09:07:14.485344 1770 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fc1d419-3814-4bc4-8054-6dbd37255d77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:07:14.485678 kubelet[1770]: E0116 09:07:14.485510 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fc1d419-3814-4bc4-8054-6dbd37255d77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pd78d" podUID="5fc1d419-3814-4bc4-8054-6dbd37255d77" Jan 16 09:07:15.158459 kubelet[1770]: E0116 09:07:15.158196 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:16.158989 kubelet[1770]: E0116 09:07:16.158827 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:17.137092 kubelet[1770]: E0116 09:07:17.137029 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:17.159200 kubelet[1770]: E0116 09:07:17.159121 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:18.159957 kubelet[1770]: E0116 09:07:18.159795 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:19.160144 kubelet[1770]: E0116 09:07:19.160030 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:19.188409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1729803768.mount: Deactivated successfully. Jan 16 09:07:19.240118 containerd[1461]: time="2025-01-16T09:07:19.239993747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:19.241883 containerd[1461]: time="2025-01-16T09:07:19.241780978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 16 09:07:19.243465 containerd[1461]: time="2025-01-16T09:07:19.243373466Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:19.246966 containerd[1461]: time="2025-01-16T09:07:19.246730592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:19.247974 containerd[1461]: time="2025-01-16T09:07:19.247662556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.853758551s" Jan 16 09:07:19.247974 containerd[1461]: time="2025-01-16T09:07:19.247737183Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 16 09:07:19.294251 containerd[1461]: time="2025-01-16T09:07:19.293905513Z" level=info msg="CreateContainer within sandbox \"cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 09:07:19.343569 containerd[1461]: time="2025-01-16T09:07:19.343273530Z" level=info msg="CreateContainer within sandbox \"cd92f16efde1e646c9c916434fc797023507f466968d07f4130a782dcad1579e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c05baffd2dfc1f2268330307c543b5b236f6ab344f5498346c6ae69f7f23fd7\"" Jan 16 09:07:19.350054 containerd[1461]: time="2025-01-16T09:07:19.345365362Z" level=info msg="StartContainer for \"9c05baffd2dfc1f2268330307c543b5b236f6ab344f5498346c6ae69f7f23fd7\"" Jan 16 09:07:19.488190 systemd[1]: Started cri-containerd-9c05baffd2dfc1f2268330307c543b5b236f6ab344f5498346c6ae69f7f23fd7.scope - libcontainer container 9c05baffd2dfc1f2268330307c543b5b236f6ab344f5498346c6ae69f7f23fd7. Jan 16 09:07:19.553273 containerd[1461]: time="2025-01-16T09:07:19.552572867Z" level=info msg="StartContainer for \"9c05baffd2dfc1f2268330307c543b5b236f6ab344f5498346c6ae69f7f23fd7\" returns successfully" Jan 16 09:07:19.679329 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 09:07:19.679575 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 16 09:07:20.160423 kubelet[1770]: E0116 09:07:20.160334 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:20.443364 kubelet[1770]: E0116 09:07:20.442703 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:20.492444 kubelet[1770]: I0116 09:07:20.492143 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9r2q2" podStartSLOduration=4.905688269 podStartE2EDuration="23.492116491s" podCreationTimestamp="2025-01-16 09:06:57 +0000 UTC" firstStartedPulling="2025-01-16 09:06:59.651856772 +0000 UTC m=+4.342952134" lastFinishedPulling="2025-01-16 09:07:19.249403895 +0000 UTC m=+22.929380356" observedRunningTime="2025-01-16 09:07:20.487055896 +0000 UTC m=+24.167032381" watchObservedRunningTime="2025-01-16 09:07:20.492116491 +0000 UTC m=+24.172092974" Jan 16 09:07:21.160612 kubelet[1770]: E0116 09:07:21.160524 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:21.444590 kubelet[1770]: E0116 09:07:21.444436 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:22.023003 kernel: bpftool[2548]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 16 09:07:22.161355 kubelet[1770]: E0116 09:07:22.161281 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:22.442386 systemd-networkd[1373]: vxlan.calico: Link UP Jan 16 09:07:22.442396 systemd-networkd[1373]: vxlan.calico: Gained carrier Jan 16 09:07:22.448217 kubelet[1770]: E0116 09:07:22.447768 1770 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:23.161909 kubelet[1770]: E0116 09:07:23.161821 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:23.544414 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Jan 16 09:07:24.162505 kubelet[1770]: E0116 09:07:24.162411 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:25.167848 kubelet[1770]: E0116 09:07:25.163620 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:25.311096 containerd[1461]: time="2025-01-16T09:07:25.311025814Z" level=info msg="StopPodSandbox for \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\"" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.461 [INFO][2653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.462 [INFO][2653] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" iface="eth0" netns="/var/run/netns/cni-ab9377e1-0fc5-388a-2252-19c24672b421" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.462 [INFO][2653] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" iface="eth0" netns="/var/run/netns/cni-ab9377e1-0fc5-388a-2252-19c24672b421" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.463 [INFO][2653] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" iface="eth0" netns="/var/run/netns/cni-ab9377e1-0fc5-388a-2252-19c24672b421" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.463 [INFO][2653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.464 [INFO][2653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.513 [INFO][2659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.514 [INFO][2659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.514 [INFO][2659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.534 [WARNING][2659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.535 [INFO][2659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.546 [INFO][2659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:25.550713 containerd[1461]: 2025-01-16 09:07:25.548 [INFO][2653] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:25.551626 containerd[1461]: time="2025-01-16T09:07:25.550959116Z" level=info msg="TearDown network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\" successfully" Jan 16 09:07:25.551626 containerd[1461]: time="2025-01-16T09:07:25.551037830Z" level=info msg="StopPodSandbox for \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\" returns successfully" Jan 16 09:07:25.554892 containerd[1461]: time="2025-01-16T09:07:25.553283018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rk47,Uid:1efedbdc-2152-4ce4-a7be-f69fdc2dddc3,Namespace:calico-system,Attempt:1,}" Jan 16 09:07:25.555645 systemd[1]: run-netns-cni\x2dab9377e1\x2d0fc5\x2d388a\x2d2252\x2d19c24672b421.mount: Deactivated successfully. 
Jan 16 09:07:25.952465 systemd-networkd[1373]: calicb356a36120: Link UP Jan 16 09:07:25.956038 systemd-networkd[1373]: calicb356a36120: Gained carrier Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.645 [INFO][2667] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.110.238.88-k8s-csi--node--driver--2rk47-eth0 csi-node-driver- calico-system 1efedbdc-2152-4ce4-a7be-f69fdc2dddc3 1096 0 2025-01-16 09:06:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 143.110.238.88 csi-node-driver-2rk47 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicb356a36120 [] []}} ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Namespace="calico-system" Pod="csi-node-driver-2rk47" WorkloadEndpoint="143.110.238.88-k8s-csi--node--driver--2rk47-" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.645 [INFO][2667] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Namespace="calico-system" Pod="csi-node-driver-2rk47" WorkloadEndpoint="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.707 [INFO][2678] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" HandleID="k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.766 [INFO][2678] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" HandleID="k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fd770), Attrs:map[string]string{"namespace":"calico-system", "node":"143.110.238.88", "pod":"csi-node-driver-2rk47", "timestamp":"2025-01-16 09:07:25.706971207 +0000 UTC"}, Hostname:"143.110.238.88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.767 [INFO][2678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.767 [INFO][2678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.767 [INFO][2678] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.110.238.88' Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.784 [INFO][2678] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.862 [INFO][2678] ipam/ipam.go 372: Looking up existing affinities for host host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.882 [INFO][2678] ipam/ipam.go 489: Trying affinity for 192.168.103.64/26 host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.891 [INFO][2678] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.64/26 host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.898 [INFO][2678] ipam/ipam.go 232: Affinity is confirmed and block has been 
loaded cidr=192.168.103.64/26 host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.898 [INFO][2678] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.903 [INFO][2678] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4 Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.925 [INFO][2678] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.944 [INFO][2678] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.65/26] block=192.168.103.64/26 handle="k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.944 [INFO][2678] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.65/26] handle="k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" host="143.110.238.88" Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.944 [INFO][2678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:07:26.002086 containerd[1461]: 2025-01-16 09:07:25.945 [INFO][2678] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.65/26] IPv6=[] ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" HandleID="k8s-pod-network.eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:26.003389 containerd[1461]: 2025-01-16 09:07:25.946 [INFO][2667] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Namespace="calico-system" Pod="csi-node-driver-2rk47" WorkloadEndpoint="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-csi--node--driver--2rk47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"", Pod:"csi-node-driver-2rk47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb356a36120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:26.003389 containerd[1461]: 2025-01-16 09:07:25.946 [INFO][2667] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.65/32] ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Namespace="calico-system" Pod="csi-node-driver-2rk47" WorkloadEndpoint="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:26.003389 containerd[1461]: 2025-01-16 09:07:25.946 [INFO][2667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb356a36120 ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Namespace="calico-system" Pod="csi-node-driver-2rk47" WorkloadEndpoint="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:26.003389 containerd[1461]: 2025-01-16 09:07:25.954 [INFO][2667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Namespace="calico-system" Pod="csi-node-driver-2rk47" WorkloadEndpoint="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:26.003389 containerd[1461]: 2025-01-16 09:07:25.954 [INFO][2667] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Namespace="calico-system" Pod="csi-node-driver-2rk47" WorkloadEndpoint="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-csi--node--driver--2rk47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, 
time.January, 16, 9, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4", Pod:"csi-node-driver-2rk47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb356a36120", MAC:"4e:50:9e:00:1c:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:26.003389 containerd[1461]: 2025-01-16 09:07:25.999 [INFO][2667] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4" Namespace="calico-system" Pod="csi-node-driver-2rk47" WorkloadEndpoint="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:26.039042 containerd[1461]: time="2025-01-16T09:07:26.038676931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:07:26.039042 containerd[1461]: time="2025-01-16T09:07:26.038839644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:07:26.039042 containerd[1461]: time="2025-01-16T09:07:26.038867058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:07:26.041339 containerd[1461]: time="2025-01-16T09:07:26.040801111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:07:26.092324 systemd[1]: Started cri-containerd-eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4.scope - libcontainer container eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4. Jan 16 09:07:26.130350 containerd[1461]: time="2025-01-16T09:07:26.130240367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rk47,Uid:1efedbdc-2152-4ce4-a7be-f69fdc2dddc3,Namespace:calico-system,Attempt:1,} returns sandbox id \"eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4\"" Jan 16 09:07:26.134781 containerd[1461]: time="2025-01-16T09:07:26.134719302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 16 09:07:26.168527 kubelet[1770]: E0116 09:07:26.168450 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:26.311556 containerd[1461]: time="2025-01-16T09:07:26.311341332Z" level=info msg="StopPodSandbox for \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\"" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.404 [INFO][2750] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.405 [INFO][2750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" iface="eth0" netns="/var/run/netns/cni-aacf4e72-6581-db46-3651-a3a48c653a29" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.405 [INFO][2750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" iface="eth0" netns="/var/run/netns/cni-aacf4e72-6581-db46-3651-a3a48c653a29" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.406 [INFO][2750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" iface="eth0" netns="/var/run/netns/cni-aacf4e72-6581-db46-3651-a3a48c653a29" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.406 [INFO][2750] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.406 [INFO][2750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.443 [INFO][2756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.443 [INFO][2756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.443 [INFO][2756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.455 [WARNING][2756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.455 [INFO][2756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.476 [INFO][2756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:26.479706 containerd[1461]: 2025-01-16 09:07:26.478 [INFO][2750] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:26.481363 containerd[1461]: time="2025-01-16T09:07:26.479937200Z" level=info msg="TearDown network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\" successfully" Jan 16 09:07:26.481363 containerd[1461]: time="2025-01-16T09:07:26.479969623Z" level=info msg="StopPodSandbox for \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\" returns successfully" Jan 16 09:07:26.481363 containerd[1461]: time="2025-01-16T09:07:26.480795345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pd78d,Uid:5fc1d419-3814-4bc4-8054-6dbd37255d77,Namespace:default,Attempt:1,}" Jan 16 09:07:26.483223 systemd[1]: run-netns-cni\x2daacf4e72\x2d6581\x2ddb46\x2d3651\x2da3a48c653a29.mount: Deactivated successfully. 
Jan 16 09:07:26.755861 systemd-networkd[1373]: cali19443baedfd: Link UP Jan 16 09:07:26.757510 systemd-networkd[1373]: cali19443baedfd: Gained carrier Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.562 [INFO][2764] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0 nginx-deployment-8587fbcb89- default 5fc1d419-3814-4bc4-8054-6dbd37255d77 1105 0 2025-01-16 09:07:13 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.110.238.88 nginx-deployment-8587fbcb89-pd78d eth0 default [] [] [kns.default ksa.default.default] cali19443baedfd [] []}} ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Namespace="default" Pod="nginx-deployment-8587fbcb89-pd78d" WorkloadEndpoint="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.562 [INFO][2764] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Namespace="default" Pod="nginx-deployment-8587fbcb89-pd78d" WorkloadEndpoint="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.612 [INFO][2775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" HandleID="k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.645 [INFO][2775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" 
HandleID="k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd0e0), Attrs:map[string]string{"namespace":"default", "node":"143.110.238.88", "pod":"nginx-deployment-8587fbcb89-pd78d", "timestamp":"2025-01-16 09:07:26.612583534 +0000 UTC"}, Hostname:"143.110.238.88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.645 [INFO][2775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.646 [INFO][2775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.646 [INFO][2775] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.110.238.88' Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.651 [INFO][2775] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" host="143.110.238.88" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.660 [INFO][2775] ipam/ipam.go 372: Looking up existing affinities for host host="143.110.238.88" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.671 [INFO][2775] ipam/ipam.go 489: Trying affinity for 192.168.103.64/26 host="143.110.238.88" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.677 [INFO][2775] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.64/26 host="143.110.238.88" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.686 [INFO][2775] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="143.110.238.88" Jan 16 
09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.686 [INFO][2775] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" host="143.110.238.88" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.694 [INFO][2775] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.721 [INFO][2775] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" host="143.110.238.88" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.747 [INFO][2775] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.66/26] block=192.168.103.64/26 handle="k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" host="143.110.238.88" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.747 [INFO][2775] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.66/26] handle="k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" host="143.110.238.88" Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.747 [INFO][2775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:07:26.788806 containerd[1461]: 2025-01-16 09:07:26.747 [INFO][2775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.66/26] IPv6=[] ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" HandleID="k8s-pod-network.6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.790999 containerd[1461]: 2025-01-16 09:07:26.749 [INFO][2764] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Namespace="default" Pod="nginx-deployment-8587fbcb89-pd78d" WorkloadEndpoint="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"5fc1d419-3814-4bc4-8054-6dbd37255d77", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-pd78d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali19443baedfd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:26.790999 containerd[1461]: 2025-01-16 09:07:26.750 [INFO][2764] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.66/32] ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Namespace="default" Pod="nginx-deployment-8587fbcb89-pd78d" WorkloadEndpoint="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.790999 containerd[1461]: 2025-01-16 09:07:26.750 [INFO][2764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19443baedfd ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Namespace="default" Pod="nginx-deployment-8587fbcb89-pd78d" WorkloadEndpoint="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.790999 containerd[1461]: 2025-01-16 09:07:26.756 [INFO][2764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Namespace="default" Pod="nginx-deployment-8587fbcb89-pd78d" WorkloadEndpoint="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.790999 containerd[1461]: 2025-01-16 09:07:26.756 [INFO][2764] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Namespace="default" Pod="nginx-deployment-8587fbcb89-pd78d" WorkloadEndpoint="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"5fc1d419-3814-4bc4-8054-6dbd37255d77", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 7, 13, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc", Pod:"nginx-deployment-8587fbcb89-pd78d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali19443baedfd", MAC:"3e:0c:f5:af:c8:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:26.790999 containerd[1461]: 2025-01-16 09:07:26.783 [INFO][2764] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc" Namespace="default" Pod="nginx-deployment-8587fbcb89-pd78d" WorkloadEndpoint="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:26.830617 containerd[1461]: time="2025-01-16T09:07:26.830081451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:07:26.830617 containerd[1461]: time="2025-01-16T09:07:26.830394241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:07:26.830617 containerd[1461]: time="2025-01-16T09:07:26.830485001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:07:26.831834 containerd[1461]: time="2025-01-16T09:07:26.831689328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:07:26.867235 systemd[1]: Started cri-containerd-6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc.scope - libcontainer container 6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc. Jan 16 09:07:26.943304 containerd[1461]: time="2025-01-16T09:07:26.943219176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pd78d,Uid:5fc1d419-3814-4bc4-8054-6dbd37255d77,Namespace:default,Attempt:1,} returns sandbox id \"6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc\"" Jan 16 09:07:27.169068 kubelet[1770]: E0116 09:07:27.168670 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:27.831429 containerd[1461]: time="2025-01-16T09:07:27.830584532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:27.833440 containerd[1461]: time="2025-01-16T09:07:27.833146818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 16 09:07:27.834707 containerd[1461]: time="2025-01-16T09:07:27.834610542Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:27.841321 containerd[1461]: time="2025-01-16T09:07:27.841201940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:27.842600 containerd[1461]: 
time="2025-01-16T09:07:27.842099913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.707322545s" Jan 16 09:07:27.842600 containerd[1461]: time="2025-01-16T09:07:27.842158643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 16 09:07:27.845628 containerd[1461]: time="2025-01-16T09:07:27.844997102Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 16 09:07:27.849301 containerd[1461]: time="2025-01-16T09:07:27.849210518Z" level=info msg="CreateContainer within sandbox \"eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 16 09:07:27.877591 containerd[1461]: time="2025-01-16T09:07:27.877497031Z" level=info msg="CreateContainer within sandbox \"eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"462305951090386e43f8285abf00f7b385412fe2bfdfdcd4fb3ce1748d56762d\"" Jan 16 09:07:27.878814 containerd[1461]: time="2025-01-16T09:07:27.878696262Z" level=info msg="StartContainer for \"462305951090386e43f8285abf00f7b385412fe2bfdfdcd4fb3ce1748d56762d\"" Jan 16 09:07:27.927794 systemd[1]: run-containerd-runc-k8s.io-462305951090386e43f8285abf00f7b385412fe2bfdfdcd4fb3ce1748d56762d-runc.xNdTff.mount: Deactivated successfully. Jan 16 09:07:27.939011 systemd[1]: Started cri-containerd-462305951090386e43f8285abf00f7b385412fe2bfdfdcd4fb3ce1748d56762d.scope - libcontainer container 462305951090386e43f8285abf00f7b385412fe2bfdfdcd4fb3ce1748d56762d. 
Jan 16 09:07:27.961150 systemd-networkd[1373]: calicb356a36120: Gained IPv6LL Jan 16 09:07:27.994588 containerd[1461]: time="2025-01-16T09:07:27.994401333Z" level=info msg="StartContainer for \"462305951090386e43f8285abf00f7b385412fe2bfdfdcd4fb3ce1748d56762d\" returns successfully" Jan 16 09:07:28.152523 systemd-networkd[1373]: cali19443baedfd: Gained IPv6LL Jan 16 09:07:28.169542 kubelet[1770]: E0116 09:07:28.169435 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:28.227030 update_engine[1447]: I20250116 09:07:28.226466 1447 update_attempter.cc:509] Updating boot flags... Jan 16 09:07:28.278305 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2570) Jan 16 09:07:28.382966 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2380) Jan 16 09:07:29.170754 kubelet[1770]: E0116 09:07:29.170610 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:30.173760 kubelet[1770]: E0116 09:07:30.173706 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:31.174541 kubelet[1770]: E0116 09:07:31.174473 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:31.632439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396454544.mount: Deactivated successfully. 
Jan 16 09:07:32.175154 kubelet[1770]: E0116 09:07:32.175100 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:33.176741 kubelet[1770]: E0116 09:07:33.176684 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:34.099082 containerd[1461]: time="2025-01-16T09:07:34.097238284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:34.099082 containerd[1461]: time="2025-01-16T09:07:34.098625317Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 16 09:07:34.099956 containerd[1461]: time="2025-01-16T09:07:34.099887013Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:34.107484 containerd[1461]: time="2025-01-16T09:07:34.107421479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:34.109441 containerd[1461]: time="2025-01-16T09:07:34.109370652Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 6.264291311s" Jan 16 09:07:34.109441 containerd[1461]: time="2025-01-16T09:07:34.109436443Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 16 09:07:34.112388 containerd[1461]: 
time="2025-01-16T09:07:34.112319244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 16 09:07:34.115068 containerd[1461]: time="2025-01-16T09:07:34.114779039Z" level=info msg="CreateContainer within sandbox \"6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 16 09:07:34.156961 containerd[1461]: time="2025-01-16T09:07:34.156751091Z" level=info msg="CreateContainer within sandbox \"6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"faca286a9ae5b393cd2c7d9c78b22d44a44213eb2ae8c1628934ac1293f02815\"" Jan 16 09:07:34.158062 containerd[1461]: time="2025-01-16T09:07:34.158010018Z" level=info msg="StartContainer for \"faca286a9ae5b393cd2c7d9c78b22d44a44213eb2ae8c1628934ac1293f02815\"" Jan 16 09:07:34.178462 kubelet[1770]: E0116 09:07:34.178298 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:34.228457 systemd[1]: Started cri-containerd-faca286a9ae5b393cd2c7d9c78b22d44a44213eb2ae8c1628934ac1293f02815.scope - libcontainer container faca286a9ae5b393cd2c7d9c78b22d44a44213eb2ae8c1628934ac1293f02815. 
Jan 16 09:07:34.316480 containerd[1461]: time="2025-01-16T09:07:34.316400563Z" level=info msg="StartContainer for \"faca286a9ae5b393cd2c7d9c78b22d44a44213eb2ae8c1628934ac1293f02815\" returns successfully" Jan 16 09:07:34.534404 kubelet[1770]: I0116 09:07:34.533809 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-pd78d" podStartSLOduration=14.367427127 podStartE2EDuration="21.533775041s" podCreationTimestamp="2025-01-16 09:07:13 +0000 UTC" firstStartedPulling="2025-01-16 09:07:26.94567459 +0000 UTC m=+30.625651053" lastFinishedPulling="2025-01-16 09:07:34.112022487 +0000 UTC m=+37.791998967" observedRunningTime="2025-01-16 09:07:34.532370632 +0000 UTC m=+38.212347117" watchObservedRunningTime="2025-01-16 09:07:34.533775041 +0000 UTC m=+38.213751526" Jan 16 09:07:35.179822 kubelet[1770]: E0116 09:07:35.179613 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:35.975638 containerd[1461]: time="2025-01-16T09:07:35.974325997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:35.979172 containerd[1461]: time="2025-01-16T09:07:35.979095404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 16 09:07:35.981407 containerd[1461]: time="2025-01-16T09:07:35.981343984Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:35.985200 containerd[1461]: time="2025-01-16T09:07:35.985134835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 16 09:07:35.989199 containerd[1461]: time="2025-01-16T09:07:35.987313254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.874922605s" Jan 16 09:07:35.989199 containerd[1461]: time="2025-01-16T09:07:35.989110787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 16 09:07:35.998099 containerd[1461]: time="2025-01-16T09:07:35.998038556Z" level=info msg="CreateContainer within sandbox \"eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 16 09:07:36.025946 containerd[1461]: time="2025-01-16T09:07:36.025672941Z" level=info msg="CreateContainer within sandbox \"eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e43b9fb5cea461e3335140a5081241f3fb36dc83c1d76f5f380029331ff54b2d\"" Jan 16 09:07:36.039694 containerd[1461]: time="2025-01-16T09:07:36.027270786Z" level=info msg="StartContainer for \"e43b9fb5cea461e3335140a5081241f3fb36dc83c1d76f5f380029331ff54b2d\"" Jan 16 09:07:36.132601 systemd[1]: Started cri-containerd-e43b9fb5cea461e3335140a5081241f3fb36dc83c1d76f5f380029331ff54b2d.scope - libcontainer container e43b9fb5cea461e3335140a5081241f3fb36dc83c1d76f5f380029331ff54b2d. 
Jan 16 09:07:36.186725 kubelet[1770]: E0116 09:07:36.186672 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:36.227736 containerd[1461]: time="2025-01-16T09:07:36.227550251Z" level=info msg="StartContainer for \"e43b9fb5cea461e3335140a5081241f3fb36dc83c1d76f5f380029331ff54b2d\" returns successfully" Jan 16 09:07:36.338564 kubelet[1770]: I0116 09:07:36.338484 1770 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 16 09:07:36.338564 kubelet[1770]: I0116 09:07:36.338568 1770 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 16 09:07:37.145489 kubelet[1770]: E0116 09:07:37.136007 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:37.187280 kubelet[1770]: E0116 09:07:37.187168 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:38.188412 kubelet[1770]: E0116 09:07:38.188007 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:39.188965 kubelet[1770]: E0116 09:07:39.188762 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:40.189430 kubelet[1770]: E0116 09:07:40.189352 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:41.190623 kubelet[1770]: E0116 09:07:41.190431 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:42.062567 kubelet[1770]: I0116 09:07:42.061828 1770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2rk47" podStartSLOduration=35.201354334 podStartE2EDuration="45.061798814s" podCreationTimestamp="2025-01-16 09:06:57 +0000 UTC" firstStartedPulling="2025-01-16 09:07:26.13391223 +0000 UTC m=+29.813888690" lastFinishedPulling="2025-01-16 09:07:35.994356695 +0000 UTC m=+39.674333170" observedRunningTime="2025-01-16 09:07:36.594473116 +0000 UTC m=+40.274449607" watchObservedRunningTime="2025-01-16 09:07:42.061798814 +0000 UTC m=+45.741775298" Jan 16 09:07:42.080085 systemd[1]: Created slice kubepods-besteffort-podad424032_a754_4640_a128_3c39de009b16.slice - libcontainer container kubepods-besteffort-podad424032_a754_4640_a128_3c39de009b16.slice. Jan 16 09:07:42.191598 kubelet[1770]: E0116 09:07:42.191515 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:42.205419 kubelet[1770]: I0116 09:07:42.205295 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtgn\" (UniqueName: \"kubernetes.io/projected/ad424032-a754-4640-a128-3c39de009b16-kube-api-access-zxtgn\") pod \"nfs-server-provisioner-0\" (UID: \"ad424032-a754-4640-a128-3c39de009b16\") " pod="default/nfs-server-provisioner-0" Jan 16 09:07:42.205678 kubelet[1770]: I0116 09:07:42.205481 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ad424032-a754-4640-a128-3c39de009b16-data\") pod \"nfs-server-provisioner-0\" (UID: \"ad424032-a754-4640-a128-3c39de009b16\") " pod="default/nfs-server-provisioner-0" Jan 16 09:07:42.391443 containerd[1461]: time="2025-01-16T09:07:42.391352586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ad424032-a754-4640-a128-3c39de009b16,Namespace:default,Attempt:0,}" Jan 16 09:07:42.428015 systemd[1]: 
run-containerd-runc-k8s.io-9c05baffd2dfc1f2268330307c543b5b236f6ab344f5498346c6ae69f7f23fd7-runc.NKvnKT.mount: Deactivated successfully. Jan 16 09:07:42.579845 kubelet[1770]: E0116 09:07:42.578024 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:07:42.880295 systemd-networkd[1373]: cali60e51b789ff: Link UP Jan 16 09:07:42.883901 systemd-networkd[1373]: cali60e51b789ff: Gained carrier Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.590 [INFO][3048] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.110.238.88-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default ad424032-a754-4640-a128-3c39de009b16 1180 0 2025-01-16 09:07:41 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 143.110.238.88 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.238.88-k8s-nfs--server--provisioner--0-" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.590 [INFO][3048] cni-plugin/k8s.go 
77: Extracted identifiers for CmdAddK8s ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.669 [INFO][3063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" HandleID="k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Workload="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.717 [INFO][3063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" HandleID="k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Workload="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290700), Attrs:map[string]string{"namespace":"default", "node":"143.110.238.88", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-16 09:07:42.669719168 +0000 UTC"}, Hostname:"143.110.238.88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.717 [INFO][3063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.718 [INFO][3063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.718 [INFO][3063] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.110.238.88' Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.754 [INFO][3063] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.770 [INFO][3063] ipam/ipam.go 372: Looking up existing affinities for host host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.787 [INFO][3063] ipam/ipam.go 489: Trying affinity for 192.168.103.64/26 host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.795 [INFO][3063] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.64/26 host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.809 [INFO][3063] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.809 [INFO][3063] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.831 [INFO][3063] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.846 [INFO][3063] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.860 [INFO][3063] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.67/26] block=192.168.103.64/26 
handle="k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.860 [INFO][3063] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.67/26] handle="k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" host="143.110.238.88" Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.860 [INFO][3063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:42.922243 containerd[1461]: 2025-01-16 09:07:42.860 [INFO][3063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.67/26] IPv6=[] ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" HandleID="k8s-pod-network.122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Workload="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" Jan 16 09:07:42.929065 containerd[1461]: 2025-01-16 09:07:42.863 [INFO][3048] cni-plugin/k8s.go 386: Populated endpoint ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ad424032-a754-4640-a128-3c39de009b16", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.103.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:42.929065 containerd[1461]: 2025-01-16 09:07:42.863 [INFO][3048] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.67/32] ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" Jan 16 09:07:42.929065 containerd[1461]: 2025-01-16 09:07:42.863 [INFO][3048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" Jan 16 09:07:42.929065 containerd[1461]: 2025-01-16 09:07:42.882 [INFO][3048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" Jan 16 09:07:42.929499 containerd[1461]: 2025-01-16 09:07:42.887 [INFO][3048] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ad424032-a754-4640-a128-3c39de009b16", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.103.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0e:d3:97:12:cc:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:42.929499 containerd[1461]: 2025-01-16 09:07:42.917 [INFO][3048] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.238.88-k8s-nfs--server--provisioner--0-eth0" Jan 16 09:07:42.993796 containerd[1461]: time="2025-01-16T09:07:42.992479391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:07:42.993796 containerd[1461]: time="2025-01-16T09:07:42.992608180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:07:42.993796 containerd[1461]: time="2025-01-16T09:07:42.992634080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:07:43.031036 containerd[1461]: time="2025-01-16T09:07:43.019477129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:07:43.087330 systemd[1]: Started cri-containerd-122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff.scope - libcontainer container 122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff. Jan 16 09:07:43.202001 kubelet[1770]: E0116 09:07:43.201944 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:43.205607 containerd[1461]: time="2025-01-16T09:07:43.205339706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ad424032-a754-4640-a128-3c39de009b16,Namespace:default,Attempt:0,} returns sandbox id \"122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff\"" Jan 16 09:07:43.211979 containerd[1461]: time="2025-01-16T09:07:43.211568013Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 16 09:07:44.204491 kubelet[1770]: E0116 09:07:44.204425 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:44.288249 systemd-networkd[1373]: cali60e51b789ff: Gained IPv6LL Jan 16 09:07:45.219881 kubelet[1770]: E0116 09:07:45.219832 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 16 09:07:46.221118 kubelet[1770]: E0116 09:07:46.221043 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:47.259897 kubelet[1770]: E0116 09:07:47.256084 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:47.444709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320823238.mount: Deactivated successfully. Jan 16 09:07:48.260173 kubelet[1770]: E0116 09:07:48.260101 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:49.273553 kubelet[1770]: E0116 09:07:49.273102 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:50.284619 kubelet[1770]: E0116 09:07:50.284470 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:51.297970 kubelet[1770]: E0116 09:07:51.297873 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:51.926944 containerd[1461]: time="2025-01-16T09:07:51.926615796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:51.928683 containerd[1461]: time="2025-01-16T09:07:51.928592779Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 16 09:07:51.952687 containerd[1461]: time="2025-01-16T09:07:51.948288032Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:51.959468 containerd[1461]: 
time="2025-01-16T09:07:51.957000184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:51.959468 containerd[1461]: time="2025-01-16T09:07:51.958591854Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 8.746959325s" Jan 16 09:07:51.959468 containerd[1461]: time="2025-01-16T09:07:51.958637726Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 16 09:07:51.991071 containerd[1461]: time="2025-01-16T09:07:51.990273665Z" level=info msg="CreateContainer within sandbox \"122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 16 09:07:52.033489 containerd[1461]: time="2025-01-16T09:07:52.033425468Z" level=info msg="CreateContainer within sandbox \"122180e96e84909a1b293cf577dbb90204ef3d9c9d86adfef3c59f22b5406bff\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c7436bb20e511f48417d7d1b2a8a89b5f89be400858a1ab75064005539cad0c8\"" Jan 16 09:07:52.033981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159428506.mount: Deactivated successfully. 
Jan 16 09:07:52.037023 containerd[1461]: time="2025-01-16T09:07:52.036749015Z" level=info msg="StartContainer for \"c7436bb20e511f48417d7d1b2a8a89b5f89be400858a1ab75064005539cad0c8\"" Jan 16 09:07:52.121077 systemd[1]: Started cri-containerd-c7436bb20e511f48417d7d1b2a8a89b5f89be400858a1ab75064005539cad0c8.scope - libcontainer container c7436bb20e511f48417d7d1b2a8a89b5f89be400858a1ab75064005539cad0c8. Jan 16 09:07:52.197319 containerd[1461]: time="2025-01-16T09:07:52.197159836Z" level=info msg="StartContainer for \"c7436bb20e511f48417d7d1b2a8a89b5f89be400858a1ab75064005539cad0c8\" returns successfully" Jan 16 09:07:52.313571 kubelet[1770]: E0116 09:07:52.310007 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:53.317032 kubelet[1770]: E0116 09:07:53.316441 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:54.316775 kubelet[1770]: E0116 09:07:54.316630 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:55.319658 kubelet[1770]: E0116 09:07:55.319530 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:56.320673 kubelet[1770]: E0116 09:07:56.320579 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:57.136974 kubelet[1770]: E0116 09:07:57.136128 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:57.176757 containerd[1461]: time="2025-01-16T09:07:57.176297568Z" level=info msg="StopPodSandbox for \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\"" Jan 16 09:07:57.331994 kubelet[1770]: E0116 09:07:57.327744 1770 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.271 [WARNING][3252] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-csi--node--driver--2rk47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4", Pod:"csi-node-driver-2rk47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb356a36120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.271 [INFO][3252] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.271 [INFO][3252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" iface="eth0" netns="" Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.272 [INFO][3252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.272 [INFO][3252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.364 [INFO][3258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.364 [INFO][3258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.364 [INFO][3258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.396 [WARNING][3258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.396 [INFO][3258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.408 [INFO][3258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:57.418026 containerd[1461]: 2025-01-16 09:07:57.413 [INFO][3252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:57.418026 containerd[1461]: time="2025-01-16T09:07:57.417763699Z" level=info msg="TearDown network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\" successfully" Jan 16 09:07:57.418026 containerd[1461]: time="2025-01-16T09:07:57.417820656Z" level=info msg="StopPodSandbox for \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\" returns successfully" Jan 16 09:07:57.433327 containerd[1461]: time="2025-01-16T09:07:57.432181990Z" level=info msg="RemovePodSandbox for \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\"" Jan 16 09:07:57.433327 containerd[1461]: time="2025-01-16T09:07:57.432520467Z" level=info msg="Forcibly stopping sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\"" Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.552 [WARNING][3280] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-csi--node--driver--2rk47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1efedbdc-2152-4ce4-a7be-f69fdc2dddc3", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"eef52434182d78abf2acc0ddaf33091278c50a7af7a6aa23028c97a1f5ab51f4", Pod:"csi-node-driver-2rk47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb356a36120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.552 [INFO][3280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.553 [INFO][3280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" iface="eth0" netns="" Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.553 [INFO][3280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.553 [INFO][3280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.628 [INFO][3286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.629 [INFO][3286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.629 [INFO][3286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.646 [WARNING][3286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.646 [INFO][3286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" HandleID="k8s-pod-network.7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Workload="143.110.238.88-k8s-csi--node--driver--2rk47-eth0" Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.650 [INFO][3286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:57.669168 containerd[1461]: 2025-01-16 09:07:57.664 [INFO][3280] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592" Jan 16 09:07:57.669168 containerd[1461]: time="2025-01-16T09:07:57.667568732Z" level=info msg="TearDown network for sandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\" successfully" Jan 16 09:07:57.771457 containerd[1461]: time="2025-01-16T09:07:57.769594113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 16 09:07:57.771699 containerd[1461]: time="2025-01-16T09:07:57.771611866Z" level=info msg="RemovePodSandbox \"7a09d4c272d67ebbac86e3dc5361d9e1fc6ea0d5bb785992f9e697890d2ee592\" returns successfully" Jan 16 09:07:57.772727 containerd[1461]: time="2025-01-16T09:07:57.772656490Z" level=info msg="StopPodSandbox for \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\"" Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.885 [WARNING][3307] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"5fc1d419-3814-4bc4-8054-6dbd37255d77", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc", Pod:"nginx-deployment-8587fbcb89-pd78d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali19443baedfd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.886 [INFO][3307] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.886 [INFO][3307] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" iface="eth0" netns="" Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.886 [INFO][3307] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.886 [INFO][3307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.932 [INFO][3313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.933 [INFO][3313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.933 [INFO][3313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.958 [WARNING][3313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.959 [INFO][3313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.964 [INFO][3313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:57.975413 containerd[1461]: 2025-01-16 09:07:57.969 [INFO][3307] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:57.977865 containerd[1461]: time="2025-01-16T09:07:57.976042818Z" level=info msg="TearDown network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\" successfully" Jan 16 09:07:57.977865 containerd[1461]: time="2025-01-16T09:07:57.976133009Z" level=info msg="StopPodSandbox for \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\" returns successfully" Jan 16 09:07:57.985621 containerd[1461]: time="2025-01-16T09:07:57.982625162Z" level=info msg="RemovePodSandbox for \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\"" Jan 16 09:07:57.985621 containerd[1461]: time="2025-01-16T09:07:57.982678468Z" level=info msg="Forcibly stopping sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\"" Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.113 [WARNING][3331] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"5fc1d419-3814-4bc4-8054-6dbd37255d77", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"6ae8c14e37b913e8f57ba2d5bb576638665e214427f7516848f2cc3d4c799ecc", Pod:"nginx-deployment-8587fbcb89-pd78d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali19443baedfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.113 [INFO][3331] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.113 [INFO][3331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" iface="eth0" netns="" Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.113 [INFO][3331] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.113 [INFO][3331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.174 [INFO][3337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.174 [INFO][3337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.174 [INFO][3337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.202 [WARNING][3337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.202 [INFO][3337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" HandleID="k8s-pod-network.912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Workload="143.110.238.88-k8s-nginx--deployment--8587fbcb89--pd78d-eth0" Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.221 [INFO][3337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:58.224993 containerd[1461]: 2025-01-16 09:07:58.222 [INFO][3331] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848" Jan 16 09:07:58.226317 containerd[1461]: time="2025-01-16T09:07:58.225058438Z" level=info msg="TearDown network for sandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\" successfully" Jan 16 09:07:58.228899 containerd[1461]: time="2025-01-16T09:07:58.228461294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 16 09:07:58.228899 containerd[1461]: time="2025-01-16T09:07:58.228555132Z" level=info msg="RemovePodSandbox \"912f2a1b5a15c6731e80ff9d78110bd3d86c7486f22e26f8ea1d5e80d7bd9848\" returns successfully"
Jan 16 09:07:58.328847 kubelet[1770]: E0116 09:07:58.328693 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:07:59.329206 kubelet[1770]: E0116 09:07:59.329090 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:00.329479 kubelet[1770]: E0116 09:08:00.329383 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:01.330305 kubelet[1770]: E0116 09:08:01.330194 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:01.847345 kubelet[1770]: I0116 09:08:01.843896 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=12.093379835 podStartE2EDuration="20.843855228s" podCreationTimestamp="2025-01-16 09:07:41 +0000 UTC" firstStartedPulling="2025-01-16 09:07:43.210214756 +0000 UTC m=+46.890191230" lastFinishedPulling="2025-01-16 09:07:51.960690145 +0000 UTC m=+55.640666623" observedRunningTime="2025-01-16 09:07:52.76048242 +0000 UTC m=+56.440458907" watchObservedRunningTime="2025-01-16 09:08:01.843855228 +0000 UTC m=+65.523831712"
Jan 16 09:08:01.878437 systemd[1]: Created slice kubepods-besteffort-pod7e1b35f4_3a16_43ec_877b_a5864e241f56.slice - libcontainer container kubepods-besteffort-pod7e1b35f4_3a16_43ec_877b_a5864e241f56.slice.
Jan 16 09:08:01.954591 kubelet[1770]: I0116 09:08:01.954529 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4c4f\" (UniqueName: \"kubernetes.io/projected/7e1b35f4-3a16-43ec-877b-a5864e241f56-kube-api-access-c4c4f\") pod \"test-pod-1\" (UID: \"7e1b35f4-3a16-43ec-877b-a5864e241f56\") " pod="default/test-pod-1"
Jan 16 09:08:01.954591 kubelet[1770]: I0116 09:08:01.954600 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e5bfef3-81ab-4fb1-9904-58bb80d172d3\" (UniqueName: \"kubernetes.io/nfs/7e1b35f4-3a16-43ec-877b-a5864e241f56-pvc-6e5bfef3-81ab-4fb1-9904-58bb80d172d3\") pod \"test-pod-1\" (UID: \"7e1b35f4-3a16-43ec-877b-a5864e241f56\") " pod="default/test-pod-1"
Jan 16 09:08:02.218970 kernel: FS-Cache: Loaded
Jan 16 09:08:02.330772 kubelet[1770]: E0116 09:08:02.330654 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:02.394676 kernel: RPC: Registered named UNIX socket transport module.
Jan 16 09:08:02.394856 kernel: RPC: Registered udp transport module.
Jan 16 09:08:02.394889 kernel: RPC: Registered tcp transport module.
Jan 16 09:08:02.394972 kernel: RPC: Registered tcp-with-tls transport module.
Jan 16 09:08:02.395007 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 16 09:08:03.021654 kernel: NFS: Registering the id_resolver key type
Jan 16 09:08:03.023052 kernel: Key type id_resolver registered
Jan 16 09:08:03.025972 kernel: Key type id_legacy registered
Jan 16 09:08:03.146245 nfsidmap[3367]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-f-8515dcac45'
Jan 16 09:08:03.165830 nfsidmap[3368]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-f-8515dcac45'
Jan 16 09:08:03.336763 kubelet[1770]: E0116 09:08:03.330830 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:03.390882 containerd[1461]: time="2025-01-16T09:08:03.390808125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7e1b35f4-3a16-43ec-877b-a5864e241f56,Namespace:default,Attempt:0,}"
Jan 16 09:08:03.885055 systemd-networkd[1373]: cali5ec59c6bf6e: Link UP
Jan 16 09:08:03.887500 systemd-networkd[1373]: cali5ec59c6bf6e: Gained carrier
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.585 [INFO][3383] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.110.238.88-k8s-test--pod--1-eth0 default 7e1b35f4-3a16-43ec-877b-a5864e241f56 1256 0 2025-01-16 09:07:43 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.110.238.88 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.238.88-k8s-test--pod--1-"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.585 [INFO][3383] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.238.88-k8s-test--pod--1-eth0"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.694 [INFO][3394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" HandleID="k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Workload="143.110.238.88-k8s-test--pod--1-eth0"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.748 [INFO][3394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" HandleID="k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Workload="143.110.238.88-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b900), Attrs:map[string]string{"namespace":"default", "node":"143.110.238.88", "pod":"test-pod-1", "timestamp":"2025-01-16 09:08:03.694486554 +0000 UTC"}, Hostname:"143.110.238.88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.748 [INFO][3394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.748 [INFO][3394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.748 [INFO][3394] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.110.238.88'
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.754 [INFO][3394] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.780 [INFO][3394] ipam/ipam.go 372: Looking up existing affinities for host host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.810 [INFO][3394] ipam/ipam.go 489: Trying affinity for 192.168.103.64/26 host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.822 [INFO][3394] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.64/26 host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.836 [INFO][3394] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.836 [INFO][3394] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.848 [INFO][3394] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.862 [INFO][3394] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.874 [INFO][3394] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.68/26] block=192.168.103.64/26 handle="k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.874 [INFO][3394] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.68/26] handle="k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" host="143.110.238.88"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.874 [INFO][3394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.874 [INFO][3394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.68/26] IPv6=[] ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" HandleID="k8s-pod-network.9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Workload="143.110.238.88-k8s-test--pod--1-eth0"
Jan 16 09:08:03.914261 containerd[1461]: 2025-01-16 09:08:03.878 [INFO][3383] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.238.88-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7e1b35f4-3a16-43ec-877b-a5864e241f56", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 7, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 16 09:08:03.920340 containerd[1461]: 2025-01-16 09:08:03.878 [INFO][3383] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.68/32] ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.238.88-k8s-test--pod--1-eth0"
Jan 16 09:08:03.920340 containerd[1461]: 2025-01-16 09:08:03.878 [INFO][3383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.238.88-k8s-test--pod--1-eth0"
Jan 16 09:08:03.920340 containerd[1461]: 2025-01-16 09:08:03.885 [INFO][3383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.238.88-k8s-test--pod--1-eth0"
Jan 16 09:08:03.920340 containerd[1461]: 2025-01-16 09:08:03.887 [INFO][3383] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.238.88-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.238.88-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7e1b35f4-3a16-43ec-877b-a5864e241f56", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 7, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.238.88", ContainerID:"9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"5e:69:be:e1:16:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 16 09:08:03.920340 containerd[1461]: 2025-01-16 09:08:03.909 [INFO][3383] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.238.88-k8s-test--pod--1-eth0"
Jan 16 09:08:03.989098 containerd[1461]: time="2025-01-16T09:08:03.986404084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:08:03.989098 containerd[1461]: time="2025-01-16T09:08:03.986647869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:08:03.989098 containerd[1461]: time="2025-01-16T09:08:03.986719554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:08:03.989098 containerd[1461]: time="2025-01-16T09:08:03.987026707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:08:04.080397 systemd[1]: Started cri-containerd-9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e.scope - libcontainer container 9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e.
Jan 16 09:08:04.198163 systemd[1]: run-containerd-runc-k8s.io-9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e-runc.yJiPub.mount: Deactivated successfully.
Jan 16 09:08:04.236302 containerd[1461]: time="2025-01-16T09:08:04.235903426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7e1b35f4-3a16-43ec-877b-a5864e241f56,Namespace:default,Attempt:0,} returns sandbox id \"9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e\""
Jan 16 09:08:04.243157 containerd[1461]: time="2025-01-16T09:08:04.242706254Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 16 09:08:04.336650 kubelet[1770]: E0116 09:08:04.336587 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:04.633157 containerd[1461]: time="2025-01-16T09:08:04.629667722Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:08:04.635383 containerd[1461]: time="2025-01-16T09:08:04.635279642Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 16 09:08:04.640401 containerd[1461]: time="2025-01-16T09:08:04.640167780Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 397.413272ms"
Jan 16 09:08:04.640401 containerd[1461]: time="2025-01-16T09:08:04.640232011Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 16 09:08:04.647501 containerd[1461]: time="2025-01-16T09:08:04.647247644Z" level=info msg="CreateContainer within sandbox \"9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 16 09:08:04.675710 containerd[1461]: time="2025-01-16T09:08:04.675664171Z" level=info msg="CreateContainer within sandbox \"9b0b542dd7bbb6c8c403248013a005dad22e8e818920f50a83287e91e1a6861e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f734c7a11df61fa1f6563d9b8e4fd0d3491d8c1ed8ec182263df3f1ab151818a\""
Jan 16 09:08:04.677999 containerd[1461]: time="2025-01-16T09:08:04.676975054Z" level=info msg="StartContainer for \"f734c7a11df61fa1f6563d9b8e4fd0d3491d8c1ed8ec182263df3f1ab151818a\""
Jan 16 09:08:04.743283 systemd[1]: Started cri-containerd-f734c7a11df61fa1f6563d9b8e4fd0d3491d8c1ed8ec182263df3f1ab151818a.scope - libcontainer container f734c7a11df61fa1f6563d9b8e4fd0d3491d8c1ed8ec182263df3f1ab151818a.
Jan 16 09:08:04.834821 containerd[1461]: time="2025-01-16T09:08:04.834396385Z" level=info msg="StartContainer for \"f734c7a11df61fa1f6563d9b8e4fd0d3491d8c1ed8ec182263df3f1ab151818a\" returns successfully"
Jan 16 09:08:05.337636 kubelet[1770]: E0116 09:08:05.337529 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:05.593073 systemd-networkd[1373]: cali5ec59c6bf6e: Gained IPv6LL
Jan 16 09:08:05.801084 kubelet[1770]: I0116 09:08:05.800900 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.399731514 podStartE2EDuration="22.800872835s" podCreationTimestamp="2025-01-16 09:07:43 +0000 UTC" firstStartedPulling="2025-01-16 09:08:04.241951176 +0000 UTC m=+67.921927652" lastFinishedPulling="2025-01-16 09:08:04.643092499 +0000 UTC m=+68.323068973" observedRunningTime="2025-01-16 09:08:05.800340952 +0000 UTC m=+69.480317437" watchObservedRunningTime="2025-01-16 09:08:05.800872835 +0000 UTC m=+69.480849329"
Jan 16 09:08:06.338307 kubelet[1770]: E0116 09:08:06.338151 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:07.339495 kubelet[1770]: E0116 09:08:07.339401 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:08.340365 kubelet[1770]: E0116 09:08:08.340257 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:09.341896 kubelet[1770]: E0116 09:08:09.341215 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:10.342331 kubelet[1770]: E0116 09:08:10.342179 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:11.342636 kubelet[1770]: E0116 09:08:11.342350 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:12.343361 kubelet[1770]: E0116 09:08:12.343225 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 09:08:13.345064 kubelet[1770]: E0116 09:08:13.345064 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"