Jan 30 13:41:05.904541 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:41:05.904583 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:41:05.904602 kernel: BIOS-provided physical RAM map:
Jan 30 13:41:05.904621 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:41:05.904628 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:41:05.904634 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:41:05.904641 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:41:05.904647 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:41:05.904654 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 30 13:41:05.904660 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 30 13:41:05.904669 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 30 13:41:05.904680 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 30 13:41:05.904686 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 30 13:41:05.904692 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 30 13:41:05.904700 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 30 13:41:05.904711 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:41:05.904720 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 30 13:41:05.904726 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 30 13:41:05.904733 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:41:05.904740 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:41:05.904746 kernel: NX (Execute Disable) protection: active
Jan 30 13:41:05.904753 kernel: APIC: Static calls initialized
Jan 30 13:41:05.904759 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:41:05.904766 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 30 13:41:05.904773 kernel: SMBIOS 2.8 present.
Jan 30 13:41:05.904779 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 30 13:41:05.904786 kernel: Hypervisor detected: KVM
Jan 30 13:41:05.904795 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:41:05.904801 kernel: kvm-clock: using sched offset of 3960587456 cycles
Jan 30 13:41:05.904808 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:41:05.904815 kernel: tsc: Detected 2794.750 MHz processor
Jan 30 13:41:05.904822 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:41:05.904829 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:41:05.904836 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 30 13:41:05.904843 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:41:05.904850 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:41:05.904859 kernel: Using GB pages for direct mapping
Jan 30 13:41:05.904865 kernel: Secure boot disabled
Jan 30 13:41:05.904872 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:41:05.904879 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 13:41:05.904889 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:41:05.904896 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:05.904904 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:05.904913 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 13:41:05.904920 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:05.904927 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:05.904934 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:05.904941 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:05.904948 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:41:05.904956 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 13:41:05.904965 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 13:41:05.904972 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 13:41:05.904979 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 13:41:05.904986 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 13:41:05.904993 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 13:41:05.905000 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 13:41:05.905007 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 13:41:05.905014 kernel: No NUMA configuration found
Jan 30 13:41:05.905021 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 30 13:41:05.905030 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 30 13:41:05.905037 kernel: Zone ranges:
Jan 30 13:41:05.905044 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:41:05.905060 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 30 13:41:05.905068 kernel: Normal empty
Jan 30 13:41:05.905075 kernel: Movable zone start for each node
Jan 30 13:41:05.905082 kernel: Early memory node ranges
Jan 30 13:41:05.905089 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:41:05.905096 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 13:41:05.905103 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 13:41:05.905113 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 30 13:41:05.905119 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 30 13:41:05.905126 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 30 13:41:05.905134 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 30 13:41:05.905141 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:41:05.905148 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:41:05.905155 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 13:41:05.905162 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:41:05.905169 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 30 13:41:05.905179 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 30 13:41:05.905186 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 30 13:41:05.905193 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:41:05.905200 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:41:05.905207 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:41:05.905214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:41:05.905221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:41:05.905229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:41:05.905236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:41:05.905245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:41:05.905252 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:41:05.905259 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:41:05.905266 kernel: TSC deadline timer available
Jan 30 13:41:05.905273 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:41:05.905280 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:41:05.905287 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:41:05.905294 kernel: kvm-guest: setup PV sched yield
Jan 30 13:41:05.905301 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 13:41:05.905308 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:41:05.905318 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:41:05.905325 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:41:05.905332 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:41:05.905339 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:41:05.905346 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:41:05.905353 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:41:05.905360 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:41:05.905369 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:41:05.905379 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:41:05.905386 kernel: random: crng init done
Jan 30 13:41:05.905393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:41:05.905400 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:41:05.905407 kernel: Fallback order for Node 0: 0
Jan 30 13:41:05.905414 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 30 13:41:05.905433 kernel: Policy zone: DMA32
Jan 30 13:41:05.905440 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:41:05.905448 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 30 13:41:05.905458 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:41:05.905465 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:41:05.905472 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:41:05.905480 kernel: Dynamic Preempt: voluntary
Jan 30 13:41:05.905494 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:41:05.905514 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:41:05.905522 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:41:05.905530 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:41:05.905537 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:41:05.905544 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:41:05.905552 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:41:05.905559 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:41:05.905570 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:41:05.905577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:41:05.905585 kernel: Console: colour dummy device 80x25
Jan 30 13:41:05.905592 kernel: printk: console [ttyS0] enabled
Jan 30 13:41:05.905599 kernel: ACPI: Core revision 20230628
Jan 30 13:41:05.905609 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:41:05.905617 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:41:05.905624 kernel: x2apic enabled
Jan 30 13:41:05.905632 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:41:05.905640 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:41:05.905647 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:41:05.905655 kernel: kvm-guest: setup PV IPIs
Jan 30 13:41:05.905663 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:41:05.905672 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:41:05.905683 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 30 13:41:05.905692 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:41:05.905700 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:41:05.905707 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:41:05.905715 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:41:05.905722 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:41:05.905730 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:41:05.905737 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:41:05.905745 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:41:05.905755 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:41:05.905762 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:41:05.905770 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:41:05.905777 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:41:05.905785 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:41:05.905793 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:41:05.905801 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:41:05.905813 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:41:05.905823 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:41:05.905830 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:41:05.905838 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:41:05.905845 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:41:05.905853 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:41:05.905860 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:41:05.905868 kernel: landlock: Up and running.
Jan 30 13:41:05.905875 kernel: SELinux: Initializing.
Jan 30 13:41:05.905883 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:41:05.905892 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:41:05.905900 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:41:05.905908 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:41:05.905915 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:41:05.905923 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:41:05.905930 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:41:05.905938 kernel: ... version: 0
Jan 30 13:41:05.905945 kernel: ... bit width: 48
Jan 30 13:41:05.905952 kernel: ... generic registers: 6
Jan 30 13:41:05.905962 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:41:05.905969 kernel: ... max period: 00007fffffffffff
Jan 30 13:41:05.905977 kernel: ... fixed-purpose events: 0
Jan 30 13:41:05.905984 kernel: ... event mask: 000000000000003f
Jan 30 13:41:05.905999 kernel: signal: max sigframe size: 1776
Jan 30 13:41:05.906007 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:41:05.906022 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:41:05.906030 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:41:05.906037 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:41:05.906053 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:41:05.906060 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:41:05.906068 kernel: smpboot: Max logical packages: 1
Jan 30 13:41:05.906075 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 30 13:41:05.906086 kernel: devtmpfs: initialized
Jan 30 13:41:05.906094 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:41:05.906101 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 13:41:05.906109 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 13:41:05.906117 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 30 13:41:05.906127 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 13:41:05.906134 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 13:41:05.906142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:41:05.906150 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:41:05.906157 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:41:05.906164 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:41:05.906172 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:41:05.906180 kernel: audit: type=2000 audit(1738244465.268:1): state=initialized audit_enabled=0 res=1
Jan 30 13:41:05.906187 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:41:05.906197 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:41:05.906204 kernel: cpuidle: using governor menu
Jan 30 13:41:05.906212 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:41:05.906219 kernel: dca service started, version 1.12.1
Jan 30 13:41:05.906227 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:41:05.906234 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:41:05.906242 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:41:05.906249 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:41:05.906257 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:41:05.906267 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:41:05.906274 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:41:05.906282 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:41:05.906289 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:41:05.906297 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:41:05.906304 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:41:05.906312 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:41:05.906319 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:41:05.906327 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:41:05.906337 kernel: ACPI: Interpreter enabled
Jan 30 13:41:05.906344 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:41:05.906351 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:41:05.906359 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:41:05.906366 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:41:05.906374 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:41:05.906381 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:41:05.906569 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:41:05.906703 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:41:05.906825 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:41:05.906835 kernel: PCI host bridge to bus 0000:00
Jan 30 13:41:05.906968 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:41:05.907089 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:41:05.907201 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:41:05.907317 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:41:05.907447 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:41:05.907562 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 30 13:41:05.907674 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:41:05.907813 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:41:05.907946 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:41:05.908076 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 13:41:05.908208 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 13:41:05.908328 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 13:41:05.908467 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 13:41:05.908589 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:41:05.908725 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:41:05.908848 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 13:41:05.908969 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 13:41:05.909104 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 30 13:41:05.909239 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:41:05.909360 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 13:41:05.909496 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 13:41:05.909618 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 30 13:41:05.909746 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:41:05.909867 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 13:41:05.909992 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 13:41:05.910120 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 30 13:41:05.910240 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 13:41:05.910375 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:41:05.910559 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:41:05.910690 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:41:05.910815 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 13:41:05.910933 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 13:41:05.911076 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:41:05.911197 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 13:41:05.911207 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:41:05.911215 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:41:05.911222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:41:05.911230 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:41:05.911241 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:41:05.911249 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:41:05.911256 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:41:05.911272 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:41:05.911287 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:41:05.911302 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:41:05.911310 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:41:05.911318 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:41:05.911325 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:41:05.911336 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:41:05.911343 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:41:05.911351 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:41:05.911359 kernel: iommu: Default domain type: Translated
Jan 30 13:41:05.911366 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:41:05.911374 kernel: efivars: Registered efivars operations
Jan 30 13:41:05.911381 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:41:05.911389 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:41:05.911396 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 13:41:05.911406 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 30 13:41:05.911414 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 30 13:41:05.911433 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 30 13:41:05.911565 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:41:05.911687 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:41:05.911810 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:41:05.911820 kernel: vgaarb: loaded
Jan 30 13:41:05.911828 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:41:05.911835 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:41:05.911847 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:41:05.911855 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:41:05.911863 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:41:05.911870 kernel: pnp: PnP ACPI init
Jan 30 13:41:05.912000 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:41:05.912011 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:41:05.912019 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:41:05.912026 kernel: NET: Registered PF_INET protocol family
Jan 30 13:41:05.912037 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:41:05.912045 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:41:05.912060 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:41:05.912068 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:41:05.912076 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:41:05.912083 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:41:05.912091 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:41:05.912098 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:41:05.912106 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:41:05.912116 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:41:05.912241 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 13:41:05.912362 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 13:41:05.912541 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:41:05.912658 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:41:05.912767 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:41:05.912875 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:41:05.912982 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:41:05.913105 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 30 13:41:05.913115 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:41:05.913123 kernel: Initialise system trusted keyrings
Jan 30 13:41:05.913130 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:41:05.913138 kernel: Key type asymmetric registered
Jan 30 13:41:05.913145 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:41:05.913153 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:41:05.913160 kernel: io scheduler mq-deadline registered
Jan 30 13:41:05.913171 kernel: io scheduler kyber registered
Jan 30 13:41:05.913178 kernel: io scheduler bfq registered
Jan 30 13:41:05.913186 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:41:05.913194 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:41:05.913202 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:41:05.913209 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:41:05.913217 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:41:05.913225 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:41:05.913232 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:41:05.913240 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:41:05.913250 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:41:05.913372 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:41:05.913383 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:41:05.913516 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:41:05.913630 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:41:05 UTC (1738244465)
Jan 30 13:41:05.913743 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:41:05.913753 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:41:05.913764 kernel: efifb: probing for efifb
Jan 30 13:41:05.913772 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 30 13:41:05.913779 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 30 13:41:05.913787 kernel: efifb: scrolling: redraw
Jan 30 13:41:05.913795 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 30 13:41:05.913802 kernel: Console: switching to colour frame buffer device 100x37
Jan 30 13:41:05.913828 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:41:05.913838 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:41:05.913846 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:41:05.913856 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:41:05.913864 kernel: Segment Routing with IPv6
Jan 30 13:41:05.913871 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:41:05.913879 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:41:05.913887 kernel: Key type dns_resolver registered
Jan 30 13:41:05.913894 kernel: IPI shorthand broadcast: enabled
Jan 30 13:41:05.913902 kernel: sched_clock: Marking stable (586003321, 113634229)->(744637305, -44999755)
Jan 30 13:41:05.913910 kernel: registered taskstats version 1
Jan 30 13:41:05.913918 kernel: Loading compiled-in X.509 certificates
Jan 30 13:41:05.913927 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:41:05.913937 kernel: Key type .fscrypt registered
Jan 30 13:41:05.913944 kernel: Key type fscrypt-provisioning registered
Jan 30 13:41:05.913952 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:41:05.913960 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:41:05.913968 kernel: ima: No architecture policies found
Jan 30 13:41:05.913975 kernel: clk: Disabling unused clocks
Jan 30 13:41:05.913983 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:41:05.913991 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:41:05.914001 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:41:05.914009 kernel: Run /init as init process
Jan 30 13:41:05.914016 kernel: with arguments:
Jan 30 13:41:05.914024 kernel: /init
Jan 30 13:41:05.914032 kernel: with environment:
Jan 30 13:41:05.914041 kernel: HOME=/
Jan 30 13:41:05.914058 kernel: TERM=linux
Jan 30 13:41:05.914066 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:41:05.914076 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:41:05.914088 systemd[1]: Detected virtualization kvm.
Jan 30 13:41:05.914097 systemd[1]: Detected architecture x86-64.
Jan 30 13:41:05.914105 systemd[1]: Running in initrd.
Jan 30 13:41:05.914116 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:41:05.914126 systemd[1]: Hostname set to .
Jan 30 13:41:05.914135 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:41:05.914143 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:41:05.914151 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:41:05.914159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:41:05.914168 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:41:05.914177 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:41:05.914185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:41:05.914196 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:41:05.914206 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:41:05.914214 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:41:05.914223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:41:05.914231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:41:05.914239 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:41:05.914247 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:41:05.914258 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:41:05.914266 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:41:05.914275 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:41:05.914283 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:41:05.914291 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:41:05.914299 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:41:05.914308 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:41:05.914316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:41:05.914327 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:41:05.914336 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:41:05.914344 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:41:05.914353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:41:05.914361 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:41:05.914369 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:41:05.914377 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:41:05.914386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:41:05.914394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:05.914406 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:41:05.914414 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:41:05.914474 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:41:05.914502 systemd-journald[192]: Collecting audit messages is disabled.
Jan 30 13:41:05.914525 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:41:05.914533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:05.914542 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:41:05.914551 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:41:05.914562 systemd-journald[192]: Journal started
Jan 30 13:41:05.914579 systemd-journald[192]: Runtime Journal (/run/log/journal/59c97fe902ae48d499cad13fa7db317c) is 6.0M, max 48.3M, 42.2M free.
Jan 30 13:41:05.903384 systemd-modules-load[193]: Inserted module 'overlay'
Jan 30 13:41:05.929174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:41:05.929197 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:41:05.934872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:41:05.935114 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:41:05.938001 kernel: Bridge firewalling registered
Jan 30 13:41:05.938094 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jan 30 13:41:05.938282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:41:05.941901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:41:05.945821 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:41:05.948229 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:41:05.951485 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:41:05.952182 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:41:05.961914 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:41:05.969660 dracut-cmdline[223]: dracut-dracut-053
Jan 30 13:41:05.974310 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:41:05.972608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:41:06.006827 systemd-resolved[232]: Positive Trust Anchors:
Jan 30 13:41:06.006841 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:41:06.006873 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:41:06.009365 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jan 30 13:41:06.010380 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:41:06.016141 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:41:06.071481 kernel: SCSI subsystem initialized
Jan 30 13:41:06.080469 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:41:06.090476 kernel: iscsi: registered transport (tcp)
Jan 30 13:41:06.117476 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:41:06.117563 kernel: QLogic iSCSI HBA Driver
Jan 30 13:41:06.175853 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:41:06.185689 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:41:06.211379 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:41:06.211439 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:41:06.211452 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:41:06.254456 kernel: raid6: avx2x4 gen() 30558 MB/s
Jan 30 13:41:06.271443 kernel: raid6: avx2x2 gen() 31185 MB/s
Jan 30 13:41:06.288512 kernel: raid6: avx2x1 gen() 26065 MB/s
Jan 30 13:41:06.288532 kernel: raid6: using algorithm avx2x2 gen() 31185 MB/s
Jan 30 13:41:06.306538 kernel: raid6: .... xor() 19750 MB/s, rmw enabled
Jan 30 13:41:06.306560 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:41:06.326446 kernel: xor: automatically using best checksumming function avx
Jan 30 13:41:06.478455 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:41:06.492580 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:41:06.505624 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:41:06.520155 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jan 30 13:41:06.525400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:41:06.533553 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:41:06.546385 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 30 13:41:06.579582 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:41:06.588606 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:41:06.652574 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:41:06.663650 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:41:06.674944 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:41:06.677115 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:41:06.678487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:41:06.680941 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:41:06.691949 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 30 13:41:06.708960 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:41:06.709146 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:41:06.709158 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:41:06.709169 kernel: GPT:9289727 != 19775487
Jan 30 13:41:06.709179 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:41:06.709190 kernel: GPT:9289727 != 19775487
Jan 30 13:41:06.709199 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:41:06.709209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:41:06.692590 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:41:06.703228 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:41:06.713798 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:41:06.713819 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:41:06.730446 kernel: libata version 3.00 loaded.
Jan 30 13:41:06.739638 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:41:06.739861 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:41:06.744452 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:41:06.768019 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:41:06.768043 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461)
Jan 30 13:41:06.768054 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:41:06.768210 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:41:06.768365 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (470)
Jan 30 13:41:06.768376 kernel: scsi host0: ahci
Jan 30 13:41:06.768545 kernel: scsi host1: ahci
Jan 30 13:41:06.768690 kernel: scsi host2: ahci
Jan 30 13:41:06.768840 kernel: scsi host3: ahci
Jan 30 13:41:06.769023 kernel: scsi host4: ahci
Jan 30 13:41:06.769181 kernel: scsi host5: ahci
Jan 30 13:41:06.769326 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 30 13:41:06.769340 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 30 13:41:06.769350 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 30 13:41:06.769361 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 30 13:41:06.769371 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 30 13:41:06.769381 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 30 13:41:06.744537 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:41:06.746815 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:41:06.747057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:06.748491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:06.760786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:06.787518 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:41:06.794183 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:41:06.799612 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:41:06.800035 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:41:06.804695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:41:06.819574 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:41:06.821966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:41:06.822040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:06.824475 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:06.827586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:06.831119 disk-uuid[553]: Primary Header is updated.
Jan 30 13:41:06.831119 disk-uuid[553]: Secondary Entries is updated.
Jan 30 13:41:06.831119 disk-uuid[553]: Secondary Header is updated.
Jan 30 13:41:06.833624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:41:06.835461 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:41:06.846856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:06.852577 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:41:06.876780 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:41:07.077513 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:07.077574 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:07.077585 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:07.078459 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:07.079455 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:41:07.080446 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:07.081454 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:41:07.081467 kernel: ata3.00: applying bridge limits
Jan 30 13:41:07.082454 kernel: ata3.00: configured for UDMA/100
Jan 30 13:41:07.084446 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:41:07.131066 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:41:07.143083 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:41:07.143103 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:41:07.836451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:41:07.836633 disk-uuid[555]: The operation has completed successfully.
Jan 30 13:41:07.864738 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:41:07.864861 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:41:07.885636 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:41:07.889385 sh[596]: Success
Jan 30 13:41:07.901450 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:41:07.933667 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:41:07.944852 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:41:07.947744 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:41:07.958812 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:41:07.958838 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:41:07.958849 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:41:07.959828 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:41:07.961443 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:41:07.965340 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:41:07.967602 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:41:07.972539 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:41:07.974542 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:41:07.984898 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:07.984950 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:41:07.984961 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:41:07.987477 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:41:07.996547 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:41:07.998787 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:08.008542 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:41:08.019621 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:41:08.069661 ignition[686]: Ignition 2.19.0
Jan 30 13:41:08.069679 ignition[686]: Stage: fetch-offline
Jan 30 13:41:08.069735 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:08.069749 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:08.069862 ignition[686]: parsed url from cmdline: ""
Jan 30 13:41:08.069867 ignition[686]: no config URL provided
Jan 30 13:41:08.069872 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:41:08.069881 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:41:08.069908 ignition[686]: op(1): [started] loading QEMU firmware config module
Jan 30 13:41:08.069913 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:41:08.077467 ignition[686]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:41:08.079553 ignition[686]: parsing config with SHA512: 182645ead7b0bd4f10bc2a7897382569d851872b7728c2e0993326431d9cbe02047f0581feaa95d779380726985d217c90d66de642bee146dc759ef664818300
Jan 30 13:41:08.082454 unknown[686]: fetched base config from "system"
Jan 30 13:41:08.082466 unknown[686]: fetched user config from "qemu"
Jan 30 13:41:08.082721 ignition[686]: fetch-offline: fetch-offline passed
Jan 30 13:41:08.082782 ignition[686]: Ignition finished successfully
Jan 30 13:41:08.084995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:41:08.105733 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:41:08.118586 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:41:08.139990 systemd-networkd[785]: lo: Link UP
Jan 30 13:41:08.140008 systemd-networkd[785]: lo: Gained carrier
Jan 30 13:41:08.141559 systemd-networkd[785]: Enumeration completed
Jan 30 13:41:08.141646 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:41:08.142027 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:41:08.142031 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:41:08.142214 systemd[1]: Reached target network.target - Network.
Jan 30 13:41:08.142672 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:41:08.143452 systemd-networkd[785]: eth0: Link UP
Jan 30 13:41:08.143457 systemd-networkd[785]: eth0: Gained carrier
Jan 30 13:41:08.143465 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:41:08.150552 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:41:08.157491 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:41:08.162319 ignition[787]: Ignition 2.19.0
Jan 30 13:41:08.162330 ignition[787]: Stage: kargs
Jan 30 13:41:08.162497 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:08.162508 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:08.163096 ignition[787]: kargs: kargs passed
Jan 30 13:41:08.163131 ignition[787]: Ignition finished successfully
Jan 30 13:41:08.166487 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:41:08.179595 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:41:08.189906 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.74
Jan 30 13:41:08.189925 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Jan 30 13:41:08.191737 ignition[797]: Ignition 2.19.0
Jan 30 13:41:08.191743 ignition[797]: Stage: disks
Jan 30 13:41:08.191894 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:08.191905 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:08.192515 ignition[797]: disks: disks passed
Jan 30 13:41:08.192558 ignition[797]: Ignition finished successfully
Jan 30 13:41:08.198686 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:41:08.199141 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:41:08.200830 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:41:08.202927 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:41:08.205384 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:41:08.207230 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:41:08.216560 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:41:08.227691 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:41:08.234358 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:41:08.235803 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:41:08.322307 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:41:08.325449 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:41:08.323875 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:41:08.334498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:41:08.336564 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:41:08.339256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:41:08.343383 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Jan 30 13:41:08.339310 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:41:08.349198 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:08.349213 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:41:08.349224 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:41:08.339333 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:41:08.352400 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:41:08.344358 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:41:08.350049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:41:08.354195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:41:08.383393 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:41:08.386899 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:41:08.390213 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:41:08.393477 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:41:08.473188 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:41:08.485539 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:41:08.487118 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:41:08.493449 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:08.509636 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:41:08.513386 ignition[929]: INFO : Ignition 2.19.0
Jan 30 13:41:08.513386 ignition[929]: INFO : Stage: mount
Jan 30 13:41:08.515103 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:08.515103 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:08.515103 ignition[929]: INFO : mount: mount passed
Jan 30 13:41:08.515103 ignition[929]: INFO : Ignition finished successfully
Jan 30 13:41:08.520728 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:41:08.533498 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:41:08.958601 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:41:08.970577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:41:08.978442 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941)
Jan 30 13:41:08.978475 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:08.980313 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:41:08.980324 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:41:08.983446 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:41:08.984891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:41:09.007542 ignition[958]: INFO : Ignition 2.19.0
Jan 30 13:41:09.007542 ignition[958]: INFO : Stage: files
Jan 30 13:41:09.009375 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:09.009375 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:09.009375 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:41:09.012984 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:41:09.012984 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:41:09.017770 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:41:09.019269 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:41:09.020904 unknown[958]: wrote ssh authorized keys file for user: core
Jan 30 13:41:09.022030 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:41:09.022030 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:41:09.025390 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:41:09.025390 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:41:09.025390 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:41:09.025390 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:41:09.025390 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:41:09.025390 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:41:09.025390 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 13:41:09.396091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 13:41:09.494778 systemd-networkd[785]: eth0: Gained IPv6LL
Jan 30 13:41:09.738901 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:41:09.738901 ignition[958]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 30 13:41:09.742732 ignition[958]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:41:09.745056 ignition[958]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:41:09.745056 ignition[958]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 30 13:41:09.745056 ignition[958]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:41:09.769911 ignition[958]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:41:09.776142 ignition[958]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:41:09.777713 ignition[958]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:41:09.777713 ignition[958]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:41:09.777713 ignition[958]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:41:09.777713 ignition[958]: INFO : files: files passed
Jan 30 13:41:09.777713 ignition[958]: INFO : Ignition finished successfully
Jan 30 13:41:09.779710 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:41:09.793546 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:41:09.795724 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:41:09.797255 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:41:09.797360 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:41:09.807385 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:41:09.810062 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:41:09.810062 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:41:09.813500 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:41:09.816878 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:41:09.819714 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:41:09.828641 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:41:09.854135 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:41:09.854267 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
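The Ignition "files" stage above writes a user, two files, a sysext symlink, a downloaded image, and a disabled unit. The actual Ignition config that drove this boot is not in the log; the sketch below builds a hypothetical spec-3.x config whose sections would produce the same operations. All contents, the SSH key, and the version string are illustrative assumptions.

```python
import json

# Hypothetical Ignition (spec 3.x) config mirroring the logged operations.
# Key/file contents are placeholders, not recovered from the log.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        # op(1)/op(2): create user "core" and install SSH keys
        "users": [{"name": "core",
                   "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}]
    },
    "storage": {
        "files": [
            # op(3)/op(4): plain files written into the target root
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,echo%20placeholder"}},
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,REBOOT_STRATEGY%3Doff"}},
            # op(6): image fetched over HTTPS into /opt/extensions
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/"
                                    "releases/download/latest/"
                                    "kubernetes-v1.32.0-x86-64.raw"}},
        ],
        "links": [
            # op(5): activate the sysext by linking it under /etc/extensions
            {"path": "/etc/extensions/kubernetes.raw", "hard": False,
             "target": "/opt/extensions/kubernetes/"
                       "kubernetes-v1.32.0-x86-64.raw"},
        ],
    },
    "systemd": {
        # op(7)/op(9): write the unit, then preset it to disabled
        "units": [{"name": "coreos-metadata.service", "enabled": False,
                   "contents": "[Service]\nExecStart=/usr/bin/true\n"}]
    },
}

print(json.dumps(config, indent=2)[:60])
```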
Jan 30 13:41:09.855054 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:41:09.857935 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:41:09.858312 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:41:09.861884 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:41:09.880167 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:41:09.888649 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:41:09.898398 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:41:09.898743 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:41:09.927517 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:41:09.927925 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:41:09.928054 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:41:09.933126 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:41:09.933478 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:41:09.933974 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:41:09.934308 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:41:09.934815 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:41:09.935162 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:41:09.935514 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:41:09.936018 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:41:09.936349 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:41:09.936848 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:41:09.937166 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:41:09.937273 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:41:09.938040 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:41:09.938388 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:41:09.938861 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:41:09.938980 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:41:09.961883 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:41:09.962006 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:41:09.965059 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:41:09.965171 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:41:09.965725 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:41:09.965984 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:41:09.974536 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:41:09.974946 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:41:09.975225 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:41:09.975770 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:41:09.975871 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:41:09.980922 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:41:09.981028 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:41:09.982499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:41:09.982620 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:41:09.983024 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:41:09.983123 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:41:09.997582 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:41:09.998082 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:41:09.998196 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:41:09.999174 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:41:10.001886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:41:10.002069 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:41:10.003975 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:41:10.004115 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:41:10.009098 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:41:10.009209 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:41:10.020936 ignition[1013]: INFO : Ignition 2.19.0
Jan 30 13:41:10.020936 ignition[1013]: INFO : Stage: umount
Jan 30 13:41:10.022737 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:10.022737 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:10.022737 ignition[1013]: INFO : umount: umount passed
Jan 30 13:41:10.022737 ignition[1013]: INFO : Ignition finished successfully
Jan 30 13:41:10.024007 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:41:10.024140 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:41:10.027068 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:41:10.027484 systemd[1]: Stopped target network.target - Network.
Jan 30 13:41:10.028640 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:41:10.028696 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:41:10.030850 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:41:10.030898 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:41:10.032800 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:41:10.032846 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:41:10.034740 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:41:10.034787 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:41:10.036744 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:41:10.038745 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:41:10.046480 systemd-networkd[785]: eth0: DHCPv6 lease lost
Jan 30 13:41:10.048550 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:41:10.048707 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:41:10.050313 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:41:10.050356 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:41:10.063516 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:41:10.064534 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:41:10.064603 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:41:10.066872 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:41:10.069302 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:41:10.069419 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:41:10.074252 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:41:10.074313 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:41:10.075865 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:41:10.075912 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:41:10.077173 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:41:10.077221 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:41:10.085715 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:41:10.085906 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:41:10.088583 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:41:10.088663 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:41:10.090297 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:41:10.090335 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:41:10.092289 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:41:10.092339 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:41:10.094790 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:41:10.094839 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:41:10.096623 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:41:10.096670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:41:10.102566 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:41:10.103835 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:41:10.103889 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:41:10.107051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:41:10.107100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:10.109602 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:41:10.109717 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:41:10.111620 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:41:10.111719 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:41:10.153460 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:41:10.153640 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:41:10.156077 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:41:10.157386 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:41:10.157462 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:41:10.178697 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:41:10.188326 systemd[1]: Switching root.
Jan 30 13:41:10.224899 systemd-journald[192]: Journal stopped
Jan 30 13:41:11.238209 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:41:11.238310 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:41:11.238325 kernel: SELinux: policy capability open_perms=1
Jan 30 13:41:11.238340 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:41:11.238352 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:41:11.238363 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:41:11.238379 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:41:11.238390 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:41:11.238406 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:41:11.238440 kernel: audit: type=1403 audit(1738244470.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:41:11.238455 systemd[1]: Successfully loaded SELinux policy in 38.780ms.
Jan 30 13:41:11.239878 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.188ms.
Jan 30 13:41:11.239894 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:41:11.239910 systemd[1]: Detected virtualization kvm.
Jan 30 13:41:11.239929 systemd[1]: Detected architecture x86-64.
Jan 30 13:41:11.239944 systemd[1]: Detected first boot.
Jan 30 13:41:11.239958 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:41:11.239970 zram_generator::config[1058]: No configuration found.
Jan 30 13:41:11.239983 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:41:11.239996 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:41:11.240008 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:41:11.240020 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:41:11.240032 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:41:11.240046 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:41:11.240058 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:41:11.240070 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:41:11.240081 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:41:11.240093 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:41:11.240106 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:41:11.240118 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:41:11.240130 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:41:11.240142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:41:11.240156 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:41:11.240168 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:41:11.240180 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:41:11.240192 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:41:11.240203 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:41:11.240217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:41:11.240229 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
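The `\x2d` sequences in unit names above (e.g. `system-addon\x2dconfig.slice` for `/system/addon-config`) come from systemd's unit-name escaping, where `/` maps to `-` and a literal `-` (0x2d) is hex-escaped. A simplified Python sketch of that rule, ignoring corner cases such as a leading `.` or multibyte characters:

```python
# Characters systemd leaves unescaped in unit-name components.
ALLOWED = set(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_."
)

def systemd_escape(component: str) -> str:
    """Simplified sketch of systemd unit-name escaping (see systemd.unit(5)).

    '/' maps to '-'; any other character outside ALLOWED becomes \\xXX.
    Corner cases (leading '.', empty input, UTF-8 bytes) are not modeled.
    """
    out = []
    for ch in component:
        if ch == "/":
            out.append("-")
        elif ch in ALLOWED:
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))  # '-' (0x2d) lands here
    return "".join(out)

print(systemd_escape("addon-config"))  # addon\x2dconfig
print(systemd_escape("serial-getty"))  # serial\x2dgetty
```

The real implementation lives in systemd's `unit_name_escape()`; `systemd-escape(1)` exposes it on the command line.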
Jan 30 13:41:11.240241 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:41:11.240254 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:41:11.240268 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:41:11.240280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:41:11.240292 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:41:11.240304 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:41:11.240316 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:41:11.240327 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:41:11.240339 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:41:11.240353 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:41:11.240365 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:41:11.240377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:41:11.240389 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:41:11.240401 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:41:11.240412 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:41:11.240438 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:41:11.240450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:41:11.240461 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:41:11.240478 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:41:11.240500 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:41:11.240519 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:41:11.240534 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:41:11.240549 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:41:11.240564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:41:11.240580 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:41:11.240595 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:41:11.240608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:41:11.240624 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:41:11.240635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:41:11.240647 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:41:11.240659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:41:11.240671 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:41:11.240683 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:41:11.240695 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:41:11.240707 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:41:11.240721 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:41:11.240733 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:41:11.240745 kernel: loop: module loaded
Jan 30 13:41:11.240756 kernel: fuse: init (API version 7.39)
Jan 30 13:41:11.240768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:41:11.240781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:41:11.240793 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:41:11.240805 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:41:11.240816 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:41:11.240831 systemd[1]: Stopped verity-setup.service.
Jan 30 13:41:11.240843 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:41:11.240888 systemd-journald[1128]: Collecting audit messages is disabled.
Jan 30 13:41:11.240909 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:41:11.240932 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:41:11.240945 systemd-journald[1128]: Journal started
Jan 30 13:41:11.240966 systemd-journald[1128]: Runtime Journal (/run/log/journal/59c97fe902ae48d499cad13fa7db317c) is 6.0M, max 48.3M, 42.2M free.
Jan 30 13:41:11.023455 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:41:11.043263 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:41:11.043698 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:41:11.242587 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:41:11.244728 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:41:11.245868 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:41:11.247088 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:41:11.248292 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:41:11.249610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:41:11.251950 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:41:11.252126 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:41:11.254613 kernel: ACPI: bus type drm_connector registered
Jan 30 13:41:11.254380 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:41:11.254576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:41:11.256205 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:41:11.256373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:41:11.257781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:41:11.258022 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:41:11.259893 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:41:11.261350 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:41:11.261547 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:41:11.262958 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:41:11.263134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:41:11.264540 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:41:11.265933 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:41:11.267619 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:41:11.283151 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:41:11.293517 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:41:11.296032 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:41:11.297296 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:41:11.297380 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:41:11.299472 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:41:11.301873 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:41:11.304107 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:41:11.305328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:41:11.307987 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:41:11.311712 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:41:11.314031 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:41:11.316068 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:41:11.317479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:41:11.318746 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:41:11.321046 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:41:11.327607 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:41:11.332385 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:41:11.333874 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:41:11.335382 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:41:11.342733 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:41:11.347516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:41:11.349836 systemd-journald[1128]: Time spent on flushing to /var/log/journal/59c97fe902ae48d499cad13fa7db317c is 25.470ms for 985 entries.
Jan 30 13:41:11.349836 systemd-journald[1128]: System Journal (/var/log/journal/59c97fe902ae48d499cad13fa7db317c) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:41:11.384723 systemd-journald[1128]: Received client request to flush runtime journal.
Jan 30 13:41:11.384854 kernel: loop0: detected capacity change from 0 to 142488
Jan 30 13:41:11.351778 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:41:11.359681 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:41:11.365017 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:41:11.373889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:41:11.382070 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:41:11.387460 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:41:11.388244 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:41:11.400251 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:41:11.409671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:41:11.411998 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:41:11.412862 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:41:11.416582 kernel: loop1: detected capacity change from 0 to 218376
Jan 30 13:41:11.432903 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 30 13:41:11.432936 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 30 13:41:11.439478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:41:11.447491 kernel: loop2: detected capacity change from 0 to 140768
Jan 30 13:41:11.482448 kernel: loop3: detected capacity change from 0 to 142488
Jan 30 13:41:11.492612 kernel: loop4: detected capacity change from 0 to 218376
Jan 30 13:41:11.500504 kernel: loop5: detected capacity change from 0 to 140768
Jan 30 13:41:11.512519 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:41:11.513201 (sd-merge)[1196]: Merged extensions into '/usr'.
Jan 30 13:41:11.517045 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:41:11.517058 systemd[1]: Reloading...
Jan 30 13:41:11.744454 zram_generator::config[1221]: No configuration found.
Jan 30 13:41:11.850271 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:41:11.908773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:41:11.962280 systemd[1]: Reloading finished in 444 ms.
Jan 30 13:41:11.994777 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:41:11.996356 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:41:12.012648 systemd[1]: Starting ensure-sysext.service...
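The `(sd-merge)` lines above show systemd-sysext activating the three extension images, including the `kubernetes.raw` symlink Ignition placed in `/etc/extensions`. As a rough sketch (not systemd's actual discovery code, which also handles directories, multiple search paths, and precedence), the extension name is derived from the image filename by stripping the `.raw` suffix:

```python
def extension_names(entries):
    """Derive sysext extension names from directory entries, as a
    simplified model of what systemd-sysext does for /etc/extensions:
    a '<name>.raw' image or symlink yields extension '<name>'; a plain
    directory is used under its own name."""
    names = []
    for entry in entries:
        if entry.endswith(".raw"):
            names.append(entry[: -len(".raw")])
        else:
            names.append(entry)
    return sorted(names)

# Entries mirroring the extensions reported by (sd-merge) in the log
print(extension_names(
    ["containerd-flatcar.raw", "docker-flatcar.raw", "kubernetes.raw"]
))
```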
Jan 30 13:41:12.014760 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:41:12.022262 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:41:12.022277 systemd[1]: Reloading...
Jan 30 13:41:12.122132 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:41:12.122515 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:41:12.123494 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:41:12.123791 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 30 13:41:12.123869 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 30 13:41:12.127091 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:41:12.127229 systemd-tmpfiles[1260]: Skipping /boot
Jan 30 13:41:12.144815 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:41:12.144942 systemd-tmpfiles[1260]: Skipping /boot
Jan 30 13:41:12.166447 zram_generator::config[1290]: No configuration found.
Jan 30 13:41:12.272286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:41:12.321678 systemd[1]: Reloading finished in 299 ms.
Jan 30 13:41:12.341636 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:41:12.349926 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:41:12.357504 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:41:12.359923 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:41:12.362247 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:41:12.366755 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:41:12.370363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:41:12.373716 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:41:12.380542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:41:12.381016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:41:12.383627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:41:12.387669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:41:12.391711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:41:12.393228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:41:12.396657 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:41:12.397819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:41:12.398862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:41:12.399911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:41:12.404026 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:41:12.404769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:41:12.407199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:41:12.407696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:41:12.408161 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Jan 30 13:41:12.409831 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:41:12.421321 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:41:12.425344 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:41:12.425713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:41:12.429730 augenrules[1355]: No rules
Jan 30 13:41:12.433867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:41:12.436683 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:41:12.446758 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:41:12.448047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:41:12.450852 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:41:12.453416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:41:12.454533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:41:12.456568 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:41:12.463937 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:41:12.466496 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:41:12.468976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:41:12.470486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:41:12.472402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:41:12.472636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:41:12.476648 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:41:12.476834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:41:12.501095 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:41:12.512628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1370)
Jan 30 13:41:12.512520 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 13:41:12.514681 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:41:12.517542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:41:12.519408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:41:12.527645 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:41:12.533600 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:41:12.537110 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:41:12.540565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:41:12.541891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:41:12.545670 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:41:12.550581 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:41:12.551740 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:41:12.551764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:41:12.552530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:41:12.555473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:41:12.557075 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:41:12.557244 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:41:12.558729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:41:12.558910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:41:12.560448 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:41:12.560616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:41:12.573190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:41:12.584582 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:41:12.585494 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 13:41:12.586513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:41:12.586585 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:41:12.593242 systemd-resolved[1330]: Positive Trust Anchors:
Jan 30 13:41:12.593546 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:41:12.593624 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:41:12.599902 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Jan 30 13:41:12.603492 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:41:12.603379 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:41:12.605046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:41:12.607148 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:41:12.612493 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 30 13:41:12.612809 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 30 13:41:12.612984 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 30 13:41:12.613152 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 30 13:41:12.620445 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 30 13:41:12.682773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:12.683131 systemd-networkd[1406]: lo: Link UP
Jan 30 13:41:12.683136 systemd-networkd[1406]: lo: Gained carrier
Jan 30 13:41:12.684314 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:41:12.684807 systemd-networkd[1406]: Enumeration completed
Jan 30 13:41:12.685211 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:41:12.685215 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:41:12.685983 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:41:12.687246 systemd-networkd[1406]: eth0: Link UP
Jan 30 13:41:12.687370 systemd-networkd[1406]: eth0: Gained carrier
Jan 30 13:41:12.687419 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:41:12.688076 systemd[1]: Reached target network.target - Network.
Jan 30 13:41:12.692341 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:41:12.694905 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:41:12.698486 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:41:12.699541 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 13:41:12.699701 systemd-timesyncd[1407]: Initial clock synchronization to Thu 2025-01-30 13:41:13.061518 UTC.
Jan 30 13:41:12.702923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:41:12.703201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:12.706525 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:41:12.742169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:12.770741 kernel: kvm_amd: TSC scaling supported
Jan 30 13:41:12.770778 kernel: kvm_amd: Nested Virtualization enabled
Jan 30 13:41:12.770791 kernel: kvm_amd: Nested Paging enabled
Jan 30 13:41:12.770822 kernel: kvm_amd: LBR virtualization supported
Jan 30 13:41:12.771967 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 30 13:41:12.771985 kernel: kvm_amd: Virtual GIF supported
Jan 30 13:41:12.792452 kernel: EDAC MC: Ver: 3.0.0
Jan 30 13:41:12.807690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:12.829620 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:41:12.842685 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:41:12.851997 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:41:12.882540 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:41:12.884036 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:41:12.885172 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:41:12.886352 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:41:12.887629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:41:12.889092 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:41:12.890807 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:41:12.892089 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:41:12.893336 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:41:12.893363 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:41:12.894293 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:41:12.895749 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:41:12.898329 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:41:12.920944 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:41:12.923226 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:41:12.924768 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:41:12.925964 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:41:12.926945 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:41:12.927932 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:41:12.927961 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:41:12.928912 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:41:12.930995 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:41:12.933502 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:41:12.935291 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:41:12.938629 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:41:12.939075 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:41:12.940623 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:41:12.946332 jq[1442]: false
Jan 30 13:41:12.946610 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:41:12.951559 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:41:12.955637 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:41:12.957090 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:41:12.957514 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:41:12.959085 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:41:12.962608 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:41:12.962775 extend-filesystems[1443]: Found loop3
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found loop4
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found loop5
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found sr0
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found vda
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found vda1
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found vda2
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found vda3
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found usr
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found vda4
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found vda6
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found vda7
Jan 30 13:41:12.965946 extend-filesystems[1443]: Found vda9
Jan 30 13:41:12.965946 extend-filesystems[1443]: Checking size of /dev/vda9
Jan 30 13:41:12.964100 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:41:12.967756 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:41:12.967969 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:41:12.968304 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:41:12.968501 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:41:12.983319 dbus-daemon[1441]: [system] SELinux support is enabled
Jan 30 13:41:12.984667 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:41:12.988268 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:41:12.988539 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:41:12.989730 jq[1452]: true
Jan 30 13:41:12.990698 extend-filesystems[1443]: Resized partition /dev/vda9
Jan 30 13:41:12.997530 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:41:12.999620 update_engine[1451]: I20250130 13:41:12.999387 1451 main.cc:92] Flatcar Update Engine starting
Jan 30 13:41:13.000910 update_engine[1451]: I20250130 13:41:13.000861 1451 update_check_scheduler.cc:74] Next update check in 2m6s
Jan 30 13:41:13.002808 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:41:13.002869 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:41:13.004410 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:41:13.004436 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:41:13.005432 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:41:13.006055 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:41:13.009506 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 30 13:41:13.016641 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:41:13.022310 jq[1468]: true
Jan 30 13:41:13.029694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1374)
Jan 30 13:41:13.045531 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 30 13:41:13.075342 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 13:41:13.075367 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 13:41:13.075902 systemd-logind[1449]: New seat seat0.
Jan 30 13:41:13.076589 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 13:41:13.076589 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 30 13:41:13.076589 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 30 13:41:13.087869 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Jan 30 13:41:13.077665 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:41:13.088954 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:41:13.089059 bash[1491]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:41:13.079802 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:41:13.080059 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:41:13.082427 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:41:13.089831 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:41:13.092846 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 30 13:41:13.106593 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:41:13.117825 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:41:13.125405 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:41:13.125659 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:41:13.128319 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:41:13.148714 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:41:13.155767 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:41:13.157903 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 13:41:13.159239 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:41:13.227403 containerd[1470]: time="2025-01-30T13:41:13.227257008Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 13:41:13.250247 containerd[1470]: time="2025-01-30T13:41:13.250035760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252308 containerd[1470]: time="2025-01-30T13:41:13.252258407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252308 containerd[1470]: time="2025-01-30T13:41:13.252296934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:41:13.252389 containerd[1470]: time="2025-01-30T13:41:13.252316517Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:41:13.252517 containerd[1470]: time="2025-01-30T13:41:13.252496440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:41:13.252542 containerd[1470]: time="2025-01-30T13:41:13.252520097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252606 containerd[1470]: time="2025-01-30T13:41:13.252588061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252627 containerd[1470]: time="2025-01-30T13:41:13.252604765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252819 containerd[1470]: time="2025-01-30T13:41:13.252785662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252819 containerd[1470]: time="2025-01-30T13:41:13.252814629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252870 containerd[1470]: time="2025-01-30T13:41:13.252835562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252870 containerd[1470]: time="2025-01-30T13:41:13.252847176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:41:13.252980 containerd[1470]: time="2025-01-30T13:41:13.252962528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:41:13.253232 containerd[1470]: time="2025-01-30T13:41:13.253206458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:41:13.253367 containerd[1470]: time="2025-01-30T13:41:13.253341737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:41:13.253367 containerd[1470]: time="2025-01-30T13:41:13.253359855Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:41:13.253492 containerd[1470]: time="2025-01-30T13:41:13.253457362Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:41:13.253560 containerd[1470]: time="2025-01-30T13:41:13.253536710Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:41:13.259648 containerd[1470]: time="2025-01-30T13:41:13.259618927Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:41:13.259683 containerd[1470]: time="2025-01-30T13:41:13.259662973Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:41:13.259704 containerd[1470]: time="2025-01-30T13:41:13.259691605Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:41:13.259725 containerd[1470]: time="2025-01-30T13:41:13.259709492Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:41:13.259745 containerd[1470]: time="2025-01-30T13:41:13.259725598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:41:13.259900 containerd[1470]: time="2025-01-30T13:41:13.259872639Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:41:13.260242 containerd[1470]: time="2025-01-30T13:41:13.260191047Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:41:13.260436 containerd[1470]: time="2025-01-30T13:41:13.260394333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:41:13.260436 containerd[1470]: time="2025-01-30T13:41:13.260419393Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:41:13.260538 containerd[1470]: time="2025-01-30T13:41:13.260433657Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:41:13.260538 containerd[1470]: time="2025-01-30T13:41:13.260466372Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:41:13.260538 containerd[1470]: time="2025-01-30T13:41:13.260480887Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:41:13.260538 containerd[1470]: time="2025-01-30T13:41:13.260494479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:41:13.260538 containerd[1470]: time="2025-01-30T13:41:13.260510774Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:41:13.260538 containerd[1470]: time="2025-01-30T13:41:13.260526608Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:41:13.260538 containerd[1470]: time="2025-01-30T13:41:13.260541112Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:41:13.260677 containerd[1470]: time="2025-01-30T13:41:13.260554370Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:41:13.260677 containerd[1470]: time="2025-01-30T13:41:13.260566738Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:41:13.260677 containerd[1470]: time="2025-01-30T13:41:13.260589818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260677 containerd[1470]: time="2025-01-30T13:41:13.260605485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260677 containerd[1470]: time="2025-01-30T13:41:13.260620209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260677 containerd[1470]: time="2025-01-30T13:41:13.260634242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260677 containerd[1470]: time="2025-01-30T13:41:13.260647594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260677 containerd[1470]: time="2025-01-30T13:41:13.260667543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260680330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260694395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260707359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260724292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260738671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260753081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260777920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260792990Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260812783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.260825 containerd[1470]: time="2025-01-30T13:41:13.260825915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260841644Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260897891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260917316Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260928888Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260941916Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260951697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260965049Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260976569Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:41:13.261010 containerd[1470]: time="2025-01-30T13:41:13.260987607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:41:13.261378 containerd[1470]: time="2025-01-30T13:41:13.261317743Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:41:13.261522 containerd[1470]: time="2025-01-30T13:41:13.261378472Z" level=info msg="Connect containerd service" Jan 30 13:41:13.261522 containerd[1470]: time="2025-01-30T13:41:13.261415931Z" level=info msg="using legacy CRI server" Jan 30 13:41:13.261522 containerd[1470]: time="2025-01-30T13:41:13.261422539Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:41:13.261586 containerd[1470]: time="2025-01-30T13:41:13.261530507Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:41:13.262141 containerd[1470]: time="2025-01-30T13:41:13.262113790Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:41:13.262336 containerd[1470]: time="2025-01-30T13:41:13.262274171Z" level=info msg="Start subscribing containerd event" Jan 30 13:41:13.262336 containerd[1470]: time="2025-01-30T13:41:13.262328836Z" level=info msg="Start recovering state" Jan 30 13:41:13.262488 containerd[1470]: time="2025-01-30T13:41:13.262468421Z" level=info msg="Start event monitor" Jan 30 13:41:13.262511 containerd[1470]: time="2025-01-30T13:41:13.262488361Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:41:13.262511 containerd[1470]: time="2025-01-30T13:41:13.262500812Z" level=info msg="Start snapshots syncer" Jan 30 13:41:13.262550 containerd[1470]: time="2025-01-30T13:41:13.262514677Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:41:13.262550 containerd[1470]: time="2025-01-30T13:41:13.262527160Z" level=info msg="Start streaming server" Jan 30 13:41:13.262550 containerd[1470]: time="2025-01-30T13:41:13.262541182Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:41:13.262673 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:41:13.263024 containerd[1470]: time="2025-01-30T13:41:13.262991436Z" level=info msg="containerd successfully booted in 0.037068s" Jan 30 13:41:14.615312 systemd-networkd[1406]: eth0: Gained IPv6LL Jan 30 13:41:14.618788 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:41:14.620686 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:41:14.638788 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:41:14.641711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:41:14.643998 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:41:14.666633 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:41:14.668431 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:41:14.668707 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:41:14.671183 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:41:15.335413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:41:15.337307 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 30 13:41:15.339260 systemd[1]: Startup finished in 719ms (kernel) + 4.784s (initrd) + 4.875s (userspace) = 10.379s. Jan 30 13:41:15.341252 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:41:15.765238 kubelet[1546]: E0130 13:41:15.765105 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:41:15.769243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:41:15.769506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:41:23.165697 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:41:23.167055 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). Jan 30 13:41:23.212308 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:23.214689 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:23.223956 systemd-logind[1449]: New session 1 of user core. Jan 30 13:41:23.225358 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:41:23.235725 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:41:23.249171 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:41:23.263744 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 13:41:23.266709 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:41:23.368721 systemd[1564]: Queued start job for default target default.target. Jan 30 13:41:23.377824 systemd[1564]: Created slice app.slice - User Application Slice. Jan 30 13:41:23.377851 systemd[1564]: Reached target paths.target - Paths. Jan 30 13:41:23.377865 systemd[1564]: Reached target timers.target - Timers. Jan 30 13:41:23.379563 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:41:23.391518 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:41:23.391681 systemd[1564]: Reached target sockets.target - Sockets. Jan 30 13:41:23.391701 systemd[1564]: Reached target basic.target - Basic System. Jan 30 13:41:23.391753 systemd[1564]: Reached target default.target - Main User Target. Jan 30 13:41:23.391805 systemd[1564]: Startup finished in 118ms. Jan 30 13:41:23.392120 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:41:23.393751 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:41:23.457025 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:33774.service - OpenSSH per-connection server daemon (10.0.0.1:33774). Jan 30 13:41:23.495066 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 33774 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:23.496999 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:23.501781 systemd-logind[1449]: New session 2 of user core. Jan 30 13:41:23.516705 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:41:23.572185 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:23.588928 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:33774.service: Deactivated successfully. Jan 30 13:41:23.591007 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 30 13:41:23.592999 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:41:23.606997 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:33782.service - OpenSSH per-connection server daemon (10.0.0.1:33782). Jan 30 13:41:23.608160 systemd-logind[1449]: Removed session 2. Jan 30 13:41:23.640866 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 33782 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:23.642644 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:23.647241 systemd-logind[1449]: New session 3 of user core. Jan 30 13:41:23.660616 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:41:23.713320 sshd[1582]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:23.725457 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:33782.service: Deactivated successfully. Jan 30 13:41:23.727148 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:41:23.728857 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:41:23.740665 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:33784.service - OpenSSH per-connection server daemon (10.0.0.1:33784). Jan 30 13:41:23.741704 systemd-logind[1449]: Removed session 3. Jan 30 13:41:23.771625 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 33784 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:23.773239 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:23.777550 systemd-logind[1449]: New session 4 of user core. Jan 30 13:41:23.794556 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:41:23.851203 sshd[1589]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:23.863382 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:33784.service: Deactivated successfully. Jan 30 13:41:23.865104 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 30 13:41:23.866780 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:41:23.873697 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:33798.service - OpenSSH per-connection server daemon (10.0.0.1:33798). Jan 30 13:41:23.874665 systemd-logind[1449]: Removed session 4. Jan 30 13:41:23.904966 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 33798 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:23.906719 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:23.910660 systemd-logind[1449]: New session 5 of user core. Jan 30 13:41:23.930653 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:41:23.989279 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:41:23.989651 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:41:24.015381 sudo[1599]: pam_unix(sudo:session): session closed for user root Jan 30 13:41:24.017649 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:24.027285 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:33798.service: Deactivated successfully. Jan 30 13:41:24.029094 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:41:24.030386 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:41:24.045739 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:33802.service - OpenSSH per-connection server daemon (10.0.0.1:33802). Jan 30 13:41:24.046630 systemd-logind[1449]: Removed session 5. Jan 30 13:41:24.076159 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 33802 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:24.077881 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:24.081886 systemd-logind[1449]: New session 6 of user core. 
Jan 30 13:41:24.091546 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:41:24.145475 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:41:24.145820 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:41:24.149795 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 30 13:41:24.156220 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:41:24.156573 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:41:24.177649 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:41:24.179349 auditctl[1611]: No rules Jan 30 13:41:24.180762 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:41:24.181042 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:41:24.182912 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:41:24.213323 augenrules[1629]: No rules Jan 30 13:41:24.215301 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:41:24.216663 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 30 13:41:24.218428 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:24.238771 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:33802.service: Deactivated successfully. Jan 30 13:41:24.240871 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:41:24.243081 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:41:24.251767 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:33804.service - OpenSSH per-connection server daemon (10.0.0.1:33804). Jan 30 13:41:24.252866 systemd-logind[1449]: Removed session 6. 
Jan 30 13:41:24.282358 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 33804 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:24.283955 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:24.288514 systemd-logind[1449]: New session 7 of user core. Jan 30 13:41:24.299553 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:41:24.355330 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:41:24.355812 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:41:24.385696 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:41:24.420572 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:41:24.420816 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:41:25.066840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:41:25.076688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:41:25.186962 systemd[1]: Reloading requested from client PID 1683 ('systemctl') (unit session-7.scope)... Jan 30 13:41:25.186977 systemd[1]: Reloading... Jan 30 13:41:25.276459 zram_generator::config[1724]: No configuration found. Jan 30 13:41:25.477004 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:41:25.551263 systemd[1]: Reloading finished in 363 ms. Jan 30 13:41:25.601499 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:41:25.605271 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:41:25.605539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:41:25.607021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:41:25.757513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:41:25.761944 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:41:25.829340 kubelet[1771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:41:25.829340 kubelet[1771]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:41:25.829340 kubelet[1771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:41:25.829724 kubelet[1771]: I0130 13:41:25.829410 1771 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:41:26.156372 kubelet[1771]: I0130 13:41:26.156230 1771 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:41:26.156372 kubelet[1771]: I0130 13:41:26.156264 1771 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:41:26.156545 kubelet[1771]: I0130 13:41:26.156526 1771 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:41:26.190281 kubelet[1771]: I0130 13:41:26.190217 1771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:41:26.197441 kubelet[1771]: E0130 13:41:26.197372 1771 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:41:26.197441 kubelet[1771]: I0130 13:41:26.197414 1771 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:41:26.203392 kubelet[1771]: I0130 13:41:26.203362 1771 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:41:26.204833 kubelet[1771]: I0130 13:41:26.204775 1771 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:41:26.205007 kubelet[1771]: I0130 13:41:26.204817 1771 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:41:26.205007 kubelet[1771]: I0130 13:41:26.204999 1771 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 30 13:41:26.205007 kubelet[1771]: I0130 13:41:26.205010 1771 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:41:26.205239 kubelet[1771]: I0130 13:41:26.205163 1771 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:41:26.208673 kubelet[1771]: I0130 13:41:26.208633 1771 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:41:26.208673 kubelet[1771]: I0130 13:41:26.208653 1771 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:41:26.208673 kubelet[1771]: I0130 13:41:26.208670 1771 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:41:26.208673 kubelet[1771]: I0130 13:41:26.208681 1771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:41:26.208827 kubelet[1771]: E0130 13:41:26.208809 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:26.208859 kubelet[1771]: E0130 13:41:26.208848 1771 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:26.213257 kubelet[1771]: I0130 13:41:26.213227 1771 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:41:26.213720 kubelet[1771]: W0130 13:41:26.213688 1771 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:41:26.213800 kubelet[1771]: E0130 13:41:26.213775 1771 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 13:41:26.213876 kubelet[1771]: W0130 13:41:26.213782 1771 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.74" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:41:26.213876 kubelet[1771]: E0130 13:41:26.213821 1771 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.74\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 13:41:26.213930 kubelet[1771]: I0130 13:41:26.213906 1771 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:41:26.214630 kubelet[1771]: W0130 13:41:26.214603 1771 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:41:26.216979 kubelet[1771]: I0130 13:41:26.216944 1771 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:41:26.217023 kubelet[1771]: I0130 13:41:26.216995 1771 server.go:1287] "Started kubelet" Jan 30 13:41:26.217099 kubelet[1771]: I0130 13:41:26.217062 1771 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:41:26.218161 kubelet[1771]: I0130 13:41:26.218079 1771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:41:26.219639 kubelet[1771]: I0130 13:41:26.218450 1771 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:41:26.219639 kubelet[1771]: I0130 13:41:26.218460 1771 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:41:26.219766 kubelet[1771]: I0130 13:41:26.219653 1771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:41:26.221132 kubelet[1771]: I0130 13:41:26.219910 1771 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:41:26.222184 kubelet[1771]: E0130 13:41:26.221922 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:26.222184 kubelet[1771]: E0130 13:41:26.221948 1771 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:41:26.222184 kubelet[1771]: I0130 13:41:26.221972 1771 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:41:26.222184 kubelet[1771]: I0130 13:41:26.222060 1771 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:41:26.222184 kubelet[1771]: I0130 13:41:26.222121 1771 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:41:26.222721 kubelet[1771]: I0130 13:41:26.222692 1771 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:41:26.222872 kubelet[1771]: I0130 13:41:26.222845 1771 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:41:26.224219 kubelet[1771]: I0130 13:41:26.224119 1771 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:41:26.235581 kubelet[1771]: I0130 13:41:26.235547 1771 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:41:26.235581 kubelet[1771]: I0130 13:41:26.235571 1771 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:41:26.235731 kubelet[1771]: I0130 13:41:26.235597 1771 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:41:26.251696 kubelet[1771]: E0130 13:41:26.251584 1771 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.74\" not found" node="10.0.0.74" Jan 30 
13:41:26.322751 kubelet[1771]: E0130 13:41:26.322693 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:26.423154 kubelet[1771]: E0130 13:41:26.423020 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:26.523443 kubelet[1771]: E0130 13:41:26.523363 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:26.585922 kubelet[1771]: E0130 13:41:26.585887 1771 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.74" not found Jan 30 13:41:26.624147 kubelet[1771]: E0130 13:41:26.624109 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:26.724729 kubelet[1771]: E0130 13:41:26.724613 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:26.777596 kubelet[1771]: I0130 13:41:26.777545 1771 policy_none.go:49] "None policy: Start" Jan 30 13:41:26.777596 kubelet[1771]: I0130 13:41:26.777582 1771 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:41:26.777596 kubelet[1771]: I0130 13:41:26.777594 1771 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:41:26.800301 kubelet[1771]: I0130 13:41:26.800242 1771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:41:26.801868 kubelet[1771]: I0130 13:41:26.801832 1771 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:41:26.801868 kubelet[1771]: I0130 13:41:26.801869 1771 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:41:26.802011 kubelet[1771]: I0130 13:41:26.801892 1771 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 13:41:26.802011 kubelet[1771]: I0130 13:41:26.801900 1771 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:41:26.802054 kubelet[1771]: E0130 13:41:26.802021 1771 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:41:26.813768 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:41:26.825503 kubelet[1771]: E0130 13:41:26.825470 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:26.832896 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:41:26.835681 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:41:26.856603 kubelet[1771]: I0130 13:41:26.856537 1771 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:41:26.856912 kubelet[1771]: I0130 13:41:26.856767 1771 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:41:26.856912 kubelet[1771]: I0130 13:41:26.856778 1771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:41:26.857354 kubelet[1771]: I0130 13:41:26.857009 1771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:41:26.858046 kubelet[1771]: E0130 13:41:26.858015 1771 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" Jan 30 13:41:26.858139 kubelet[1771]: E0130 13:41:26.858053 1771 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.74\" not found" Jan 30 13:41:26.947700 kubelet[1771]: E0130 13:41:26.947667 1771 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.74" not found Jan 30 13:41:26.958363 kubelet[1771]: I0130 13:41:26.958340 1771 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.74" Jan 30 13:41:26.962161 kubelet[1771]: I0130 13:41:26.962132 1771 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.74" Jan 30 13:41:26.962161 kubelet[1771]: E0130 13:41:26.962154 1771 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.74\": node \"10.0.0.74\" not found" Jan 30 13:41:26.965838 kubelet[1771]: E0130 13:41:26.965810 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:27.066364 kubelet[1771]: E0130 13:41:27.066168 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:27.159826 kubelet[1771]: I0130 13:41:27.159784 1771 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:41:27.160008 kubelet[1771]: W0130 13:41:27.159981 1771 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:41:27.160111 kubelet[1771]: W0130 13:41:27.160074 1771 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: 
k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:41:27.167019 kubelet[1771]: E0130 13:41:27.166988 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:27.209708 kubelet[1771]: E0130 13:41:27.209689 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:27.267801 kubelet[1771]: E0130 13:41:27.267780 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:27.276076 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 30 13:41:27.277672 sshd[1637]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:27.281094 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:33804.service: Deactivated successfully. Jan 30 13:41:27.282814 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:41:27.283349 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:41:27.284123 systemd-logind[1449]: Removed session 7. 
Jan 30 13:41:27.368968 kubelet[1771]: E0130 13:41:27.368838 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:27.469860 kubelet[1771]: E0130 13:41:27.469807 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:27.570970 kubelet[1771]: E0130 13:41:27.570918 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.74\" not found" Jan 30 13:41:27.672657 kubelet[1771]: I0130 13:41:27.672546 1771 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 13:41:27.673033 containerd[1470]: time="2025-01-30T13:41:27.672978813Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:41:27.673408 kubelet[1771]: I0130 13:41:27.673196 1771 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:41:28.209765 kubelet[1771]: I0130 13:41:28.209732 1771 apiserver.go:52] "Watching apiserver" Jan 30 13:41:28.210139 kubelet[1771]: E0130 13:41:28.209769 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:28.211836 kubelet[1771]: E0130 13:41:28.211801 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlb4" podUID="d23cfe7e-0895-464a-a0d2-82dfd5749cfe" Jan 30 13:41:28.218115 systemd[1]: Created slice kubepods-besteffort-poded32f3ee_4d2c_4887_80ad_78066d6b2108.slice - libcontainer container kubepods-besteffort-poded32f3ee_4d2c_4887_80ad_78066d6b2108.slice. 
Jan 30 13:41:28.222477 kubelet[1771]: I0130 13:41:28.222413 1771 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:41:28.231093 systemd[1]: Created slice kubepods-besteffort-poda733e06a_a301_447e_8e96_794761d663ed.slice - libcontainer container kubepods-besteffort-poda733e06a_a301_447e_8e96_794761d663ed.slice. Jan 30 13:41:28.233739 kubelet[1771]: I0130 13:41:28.233719 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-cni-bin-dir\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.233799 kubelet[1771]: I0130 13:41:28.233748 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnzr6\" (UniqueName: \"kubernetes.io/projected/a733e06a-a301-447e-8e96-794761d663ed-kube-api-access-xnzr6\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.233799 kubelet[1771]: I0130 13:41:28.233770 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d23cfe7e-0895-464a-a0d2-82dfd5749cfe-varrun\") pod \"csi-node-driver-rjlb4\" (UID: \"d23cfe7e-0895-464a-a0d2-82dfd5749cfe\") " pod="calico-system/csi-node-driver-rjlb4" Jan 30 13:41:28.233799 kubelet[1771]: I0130 13:41:28.233790 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed32f3ee-4d2c-4887-80ad-78066d6b2108-lib-modules\") pod \"kube-proxy-vcddq\" (UID: \"ed32f3ee-4d2c-4887-80ad-78066d6b2108\") " pod="kube-system/kube-proxy-vcddq" Jan 30 13:41:28.233870 kubelet[1771]: I0130 13:41:28.233806 1771 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-lib-modules\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.233870 kubelet[1771]: I0130 13:41:28.233822 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-var-run-calico\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.233870 kubelet[1771]: I0130 13:41:28.233840 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-var-lib-calico\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.233930 kubelet[1771]: I0130 13:41:28.233881 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-cni-net-dir\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.233930 kubelet[1771]: I0130 13:41:28.233915 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-flexvol-driver-host\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.233975 kubelet[1771]: I0130 13:41:28.233935 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-sshq4\" (UniqueName: \"kubernetes.io/projected/d23cfe7e-0895-464a-a0d2-82dfd5749cfe-kube-api-access-sshq4\") pod \"csi-node-driver-rjlb4\" (UID: \"d23cfe7e-0895-464a-a0d2-82dfd5749cfe\") " pod="calico-system/csi-node-driver-rjlb4" Jan 30 13:41:28.233975 kubelet[1771]: I0130 13:41:28.233955 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ghc\" (UniqueName: \"kubernetes.io/projected/ed32f3ee-4d2c-4887-80ad-78066d6b2108-kube-api-access-24ghc\") pod \"kube-proxy-vcddq\" (UID: \"ed32f3ee-4d2c-4887-80ad-78066d6b2108\") " pod="kube-system/kube-proxy-vcddq" Jan 30 13:41:28.233975 kubelet[1771]: I0130 13:41:28.233969 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-cni-log-dir\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.234032 kubelet[1771]: I0130 13:41:28.233986 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed32f3ee-4d2c-4887-80ad-78066d6b2108-kube-proxy\") pod \"kube-proxy-vcddq\" (UID: \"ed32f3ee-4d2c-4887-80ad-78066d6b2108\") " pod="kube-system/kube-proxy-vcddq" Jan 30 13:41:28.234032 kubelet[1771]: I0130 13:41:28.234000 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed32f3ee-4d2c-4887-80ad-78066d6b2108-xtables-lock\") pod \"kube-proxy-vcddq\" (UID: \"ed32f3ee-4d2c-4887-80ad-78066d6b2108\") " pod="kube-system/kube-proxy-vcddq" Jan 30 13:41:28.234032 kubelet[1771]: I0130 13:41:28.234018 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-xtables-lock\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.234093 kubelet[1771]: I0130 13:41:28.234035 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a733e06a-a301-447e-8e96-794761d663ed-policysync\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.234093 kubelet[1771]: I0130 13:41:28.234067 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a733e06a-a301-447e-8e96-794761d663ed-tigera-ca-bundle\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.234093 kubelet[1771]: I0130 13:41:28.234081 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d23cfe7e-0895-464a-a0d2-82dfd5749cfe-kubelet-dir\") pod \"csi-node-driver-rjlb4\" (UID: \"d23cfe7e-0895-464a-a0d2-82dfd5749cfe\") " pod="calico-system/csi-node-driver-rjlb4" Jan 30 13:41:28.234154 kubelet[1771]: I0130 13:41:28.234096 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d23cfe7e-0895-464a-a0d2-82dfd5749cfe-socket-dir\") pod \"csi-node-driver-rjlb4\" (UID: \"d23cfe7e-0895-464a-a0d2-82dfd5749cfe\") " pod="calico-system/csi-node-driver-rjlb4" Jan 30 13:41:28.234154 kubelet[1771]: I0130 13:41:28.234110 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/d23cfe7e-0895-464a-a0d2-82dfd5749cfe-registration-dir\") pod \"csi-node-driver-rjlb4\" (UID: \"d23cfe7e-0895-464a-a0d2-82dfd5749cfe\") " pod="calico-system/csi-node-driver-rjlb4" Jan 30 13:41:28.234154 kubelet[1771]: I0130 13:41:28.234125 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a733e06a-a301-447e-8e96-794761d663ed-node-certs\") pod \"calico-node-9vmdx\" (UID: \"a733e06a-a301-447e-8e96-794761d663ed\") " pod="calico-system/calico-node-9vmdx" Jan 30 13:41:28.335848 kubelet[1771]: E0130 13:41:28.335809 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:28.335848 kubelet[1771]: W0130 13:41:28.335831 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:28.335848 kubelet[1771]: E0130 13:41:28.335851 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:28.338232 kubelet[1771]: E0130 13:41:28.338204 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:28.338232 kubelet[1771]: W0130 13:41:28.338223 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:28.338377 kubelet[1771]: E0130 13:41:28.338242 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:28.344660 kubelet[1771]: E0130 13:41:28.344627 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:28.344660 kubelet[1771]: W0130 13:41:28.344648 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:28.344660 kubelet[1771]: E0130 13:41:28.344662 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:28.345398 kubelet[1771]: E0130 13:41:28.345381 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:28.345398 kubelet[1771]: W0130 13:41:28.345396 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:28.345488 kubelet[1771]: E0130 13:41:28.345416 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:28.345744 kubelet[1771]: E0130 13:41:28.345715 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:28.345744 kubelet[1771]: W0130 13:41:28.345739 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:28.345851 kubelet[1771]: E0130 13:41:28.345763 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:28.529846 kubelet[1771]: E0130 13:41:28.529732 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:28.530521 containerd[1470]: time="2025-01-30T13:41:28.530477304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vcddq,Uid:ed32f3ee-4d2c-4887-80ad-78066d6b2108,Namespace:kube-system,Attempt:0,}" Jan 30 13:41:28.533021 kubelet[1771]: E0130 13:41:28.532975 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:28.533326 containerd[1470]: time="2025-01-30T13:41:28.533288607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9vmdx,Uid:a733e06a-a301-447e-8e96-794761d663ed,Namespace:calico-system,Attempt:0,}" Jan 30 13:41:29.085230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount805775646.mount: Deactivated successfully. 
Jan 30 13:41:29.091961 containerd[1470]: time="2025-01-30T13:41:29.091897765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:29.092738 containerd[1470]: time="2025-01-30T13:41:29.092696057Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:29.093446 containerd[1470]: time="2025-01-30T13:41:29.093388761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:41:29.094316 containerd[1470]: time="2025-01-30T13:41:29.094282850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:41:29.095177 containerd[1470]: time="2025-01-30T13:41:29.095133135Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:29.097179 containerd[1470]: time="2025-01-30T13:41:29.097146743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:29.100633 containerd[1470]: time="2025-01-30T13:41:29.100542365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.962404ms" Jan 30 13:41:29.100869 containerd[1470]: 
time="2025-01-30T13:41:29.100829718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.430586ms" Jan 30 13:41:29.209969 kubelet[1771]: E0130 13:41:29.209909 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:29.472220 containerd[1470]: time="2025-01-30T13:41:29.471771993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:29.472220 containerd[1470]: time="2025-01-30T13:41:29.471829164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:29.472220 containerd[1470]: time="2025-01-30T13:41:29.471842671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:29.472220 containerd[1470]: time="2025-01-30T13:41:29.471930139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:29.472497 containerd[1470]: time="2025-01-30T13:41:29.471316845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:29.472497 containerd[1470]: time="2025-01-30T13:41:29.472295582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:29.472497 containerd[1470]: time="2025-01-30T13:41:29.472311356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:29.472497 containerd[1470]: time="2025-01-30T13:41:29.472386727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:29.645844 systemd[1]: Started cri-containerd-5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986.scope - libcontainer container 5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986. Jan 30 13:41:29.650812 systemd[1]: Started cri-containerd-257666c490d7f1b065ef92aebc0a78fc41dcacfebf63e6dbb833563d997ba78f.scope - libcontainer container 257666c490d7f1b065ef92aebc0a78fc41dcacfebf63e6dbb833563d997ba78f. Jan 30 13:41:29.676907 containerd[1470]: time="2025-01-30T13:41:29.676855407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9vmdx,Uid:a733e06a-a301-447e-8e96-794761d663ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986\"" Jan 30 13:41:29.677972 kubelet[1771]: E0130 13:41:29.677947 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:29.679387 containerd[1470]: time="2025-01-30T13:41:29.679316237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vcddq,Uid:ed32f3ee-4d2c-4887-80ad-78066d6b2108,Namespace:kube-system,Attempt:0,} returns sandbox id \"257666c490d7f1b065ef92aebc0a78fc41dcacfebf63e6dbb833563d997ba78f\"" Jan 30 13:41:29.679387 containerd[1470]: time="2025-01-30T13:41:29.679318181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:41:29.679712 kubelet[1771]: E0130 13:41:29.679696 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 
13:41:29.802878 kubelet[1771]: E0130 13:41:29.802759 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlb4" podUID="d23cfe7e-0895-464a-a0d2-82dfd5749cfe" Jan 30 13:41:30.210453 kubelet[1771]: E0130 13:41:30.210318 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:31.164590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641388146.mount: Deactivated successfully. Jan 30 13:41:31.210646 kubelet[1771]: E0130 13:41:31.210593 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:31.252082 containerd[1470]: time="2025-01-30T13:41:31.252012470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:31.252854 containerd[1470]: time="2025-01-30T13:41:31.252813500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:41:31.254341 containerd[1470]: time="2025-01-30T13:41:31.254299097Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:31.256818 containerd[1470]: time="2025-01-30T13:41:31.256773249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:31.257482 containerd[1470]: time="2025-01-30T13:41:31.257443592Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.578038005s" Jan 30 13:41:31.257528 containerd[1470]: time="2025-01-30T13:41:31.257479817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:41:31.258439 containerd[1470]: time="2025-01-30T13:41:31.258409008Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:41:31.259702 containerd[1470]: time="2025-01-30T13:41:31.259663311Z" level=info msg="CreateContainer within sandbox \"5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:41:31.277800 containerd[1470]: time="2025-01-30T13:41:31.277739729Z" level=info msg="CreateContainer within sandbox \"5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7\"" Jan 30 13:41:31.278494 containerd[1470]: time="2025-01-30T13:41:31.278395929Z" level=info msg="StartContainer for \"af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7\"" Jan 30 13:41:31.315584 systemd[1]: Started cri-containerd-af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7.scope - libcontainer container af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7. 
Jan 30 13:41:31.348136 containerd[1470]: time="2025-01-30T13:41:31.348096252Z" level=info msg="StartContainer for \"af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7\" returns successfully" Jan 30 13:41:31.363158 systemd[1]: cri-containerd-af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7.scope: Deactivated successfully. Jan 30 13:41:31.472950 containerd[1470]: time="2025-01-30T13:41:31.472817907Z" level=info msg="shim disconnected" id=af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7 namespace=k8s.io Jan 30 13:41:31.472950 containerd[1470]: time="2025-01-30T13:41:31.472870479Z" level=warning msg="cleaning up after shim disconnected" id=af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7 namespace=k8s.io Jan 30 13:41:31.472950 containerd[1470]: time="2025-01-30T13:41:31.472878949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:41:31.802730 kubelet[1771]: E0130 13:41:31.802591 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlb4" podUID="d23cfe7e-0895-464a-a0d2-82dfd5749cfe" Jan 30 13:41:31.814262 kubelet[1771]: E0130 13:41:31.814228 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:32.142956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af138b2cb2232362bc0fbecded465a26697b1769054f48c3c09713e7ec4104d7-rootfs.mount: Deactivated successfully. 
Jan 30 13:41:32.211043 kubelet[1771]: E0130 13:41:32.210987 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:32.566887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1074130382.mount: Deactivated successfully. Jan 30 13:41:32.942121 containerd[1470]: time="2025-01-30T13:41:32.942005419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:32.942716 containerd[1470]: time="2025-01-30T13:41:32.942682388Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:41:32.943697 containerd[1470]: time="2025-01-30T13:41:32.943674651Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:32.945504 containerd[1470]: time="2025-01-30T13:41:32.945471157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:32.946078 containerd[1470]: time="2025-01-30T13:41:32.946049912Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.687542785s" Jan 30 13:41:32.946122 containerd[1470]: time="2025-01-30T13:41:32.946079483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:41:32.946863 containerd[1470]: 
time="2025-01-30T13:41:32.946838317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:41:32.947915 containerd[1470]: time="2025-01-30T13:41:32.947891753Z" level=info msg="CreateContainer within sandbox \"257666c490d7f1b065ef92aebc0a78fc41dcacfebf63e6dbb833563d997ba78f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:41:32.962585 containerd[1470]: time="2025-01-30T13:41:32.962555315Z" level=info msg="CreateContainer within sandbox \"257666c490d7f1b065ef92aebc0a78fc41dcacfebf63e6dbb833563d997ba78f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ac18d1e23d238074ea30061fb41aab12c1b2dcc7e521c4bd3a77ee67b13c168\"" Jan 30 13:41:32.962838 containerd[1470]: time="2025-01-30T13:41:32.962821663Z" level=info msg="StartContainer for \"1ac18d1e23d238074ea30061fb41aab12c1b2dcc7e521c4bd3a77ee67b13c168\"" Jan 30 13:41:33.153556 systemd[1]: Started cri-containerd-1ac18d1e23d238074ea30061fb41aab12c1b2dcc7e521c4bd3a77ee67b13c168.scope - libcontainer container 1ac18d1e23d238074ea30061fb41aab12c1b2dcc7e521c4bd3a77ee67b13c168. 
Jan 30 13:41:33.181419 containerd[1470]: time="2025-01-30T13:41:33.181369129Z" level=info msg="StartContainer for \"1ac18d1e23d238074ea30061fb41aab12c1b2dcc7e521c4bd3a77ee67b13c168\" returns successfully" Jan 30 13:41:33.212285 kubelet[1771]: E0130 13:41:33.211869 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:33.802978 kubelet[1771]: E0130 13:41:33.802910 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlb4" podUID="d23cfe7e-0895-464a-a0d2-82dfd5749cfe" Jan 30 13:41:33.817493 kubelet[1771]: E0130 13:41:33.817474 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:33.827505 kubelet[1771]: I0130 13:41:33.827451 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vcddq" podStartSLOduration=3.560783136 podStartE2EDuration="6.82741721s" podCreationTimestamp="2025-01-30 13:41:27 +0000 UTC" firstStartedPulling="2025-01-30 13:41:29.680118115 +0000 UTC m=+3.912378483" lastFinishedPulling="2025-01-30 13:41:32.946752189 +0000 UTC m=+7.179012557" observedRunningTime="2025-01-30 13:41:33.827318709 +0000 UTC m=+8.059579077" watchObservedRunningTime="2025-01-30 13:41:33.82741721 +0000 UTC m=+8.059677578" Jan 30 13:41:34.212360 kubelet[1771]: E0130 13:41:34.212264 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:34.818506 kubelet[1771]: E0130 13:41:34.818474 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:35.213459 kubelet[1771]: E0130 13:41:35.213289 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:35.802798 kubelet[1771]: E0130 13:41:35.802748 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlb4" podUID="d23cfe7e-0895-464a-a0d2-82dfd5749cfe" Jan 30 13:41:36.100416 containerd[1470]: time="2025-01-30T13:41:36.100289367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:36.101131 containerd[1470]: time="2025-01-30T13:41:36.101082421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:41:36.102265 containerd[1470]: time="2025-01-30T13:41:36.102219842Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:36.104668 containerd[1470]: time="2025-01-30T13:41:36.104630540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:36.105374 containerd[1470]: time="2025-01-30T13:41:36.105332684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.158431753s" Jan 30 13:41:36.105400 
containerd[1470]: time="2025-01-30T13:41:36.105369912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:41:36.107130 containerd[1470]: time="2025-01-30T13:41:36.107101016Z" level=info msg="CreateContainer within sandbox \"5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:41:36.122770 containerd[1470]: time="2025-01-30T13:41:36.122724103Z" level=info msg="CreateContainer within sandbox \"5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0\"" Jan 30 13:41:36.123370 containerd[1470]: time="2025-01-30T13:41:36.123321351Z" level=info msg="StartContainer for \"f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0\"" Jan 30 13:41:36.154569 systemd[1]: Started cri-containerd-f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0.scope - libcontainer container f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0. 
Jan 30 13:41:36.185344 containerd[1470]: time="2025-01-30T13:41:36.185307001Z" level=info msg="StartContainer for \"f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0\" returns successfully" Jan 30 13:41:36.214534 kubelet[1771]: E0130 13:41:36.214491 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:36.824107 kubelet[1771]: E0130 13:41:36.824079 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:36.874209 containerd[1470]: time="2025-01-30T13:41:36.874164652Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:41:36.876636 systemd[1]: cri-containerd-f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0.scope: Deactivated successfully. Jan 30 13:41:36.896479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0-rootfs.mount: Deactivated successfully. 
Jan 30 13:41:36.924823 kubelet[1771]: I0130 13:41:36.924782 1771 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:41:37.215584 kubelet[1771]: E0130 13:41:37.215404 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:37.473538 containerd[1470]: time="2025-01-30T13:41:37.473368471Z" level=info msg="shim disconnected" id=f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0 namespace=k8s.io Jan 30 13:41:37.473538 containerd[1470]: time="2025-01-30T13:41:37.473447312Z" level=warning msg="cleaning up after shim disconnected" id=f865fe92f90ca930851083abc6f72a25aa5069ac251f79e4245f4014b3509ac0 namespace=k8s.io Jan 30 13:41:37.473538 containerd[1470]: time="2025-01-30T13:41:37.473457882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:41:37.808857 systemd[1]: Created slice kubepods-besteffort-podd23cfe7e_0895_464a_a0d2_82dfd5749cfe.slice - libcontainer container kubepods-besteffort-podd23cfe7e_0895_464a_a0d2_82dfd5749cfe.slice. 
Jan 30 13:41:37.811353 containerd[1470]: time="2025-01-30T13:41:37.811314252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjlb4,Uid:d23cfe7e-0895-464a-a0d2-82dfd5749cfe,Namespace:calico-system,Attempt:0,}" Jan 30 13:41:37.828744 kubelet[1771]: E0130 13:41:37.828697 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:37.830111 containerd[1470]: time="2025-01-30T13:41:37.829730351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:41:37.883120 containerd[1470]: time="2025-01-30T13:41:37.883034728Z" level=error msg="Failed to destroy network for sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:37.883485 containerd[1470]: time="2025-01-30T13:41:37.883448992Z" level=error msg="encountered an error cleaning up failed sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:37.883695 containerd[1470]: time="2025-01-30T13:41:37.883497652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjlb4,Uid:d23cfe7e-0895-464a-a0d2-82dfd5749cfe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 30 13:41:37.883739 kubelet[1771]: E0130 13:41:37.883683 1771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:37.883794 kubelet[1771]: E0130 13:41:37.883764 1771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjlb4" Jan 30 13:41:37.883794 kubelet[1771]: E0130 13:41:37.883787 1771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjlb4" Jan 30 13:41:37.883867 kubelet[1771]: E0130 13:41:37.883825 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rjlb4_calico-system(d23cfe7e-0895-464a-a0d2-82dfd5749cfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rjlb4_calico-system(d23cfe7e-0895-464a-a0d2-82dfd5749cfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjlb4" podUID="d23cfe7e-0895-464a-a0d2-82dfd5749cfe" Jan 30 13:41:37.885203 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a-shm.mount: Deactivated successfully. Jan 30 13:41:38.216349 kubelet[1771]: E0130 13:41:38.216180 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:38.704384 systemd[1]: Created slice kubepods-besteffort-pod1a712060_ee7d_427b_b665_7f281cead12d.slice - libcontainer container kubepods-besteffort-pod1a712060_ee7d_427b_b665_7f281cead12d.slice. Jan 30 13:41:38.802578 kubelet[1771]: I0130 13:41:38.802530 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5zck\" (UniqueName: \"kubernetes.io/projected/1a712060-ee7d-427b-b665-7f281cead12d-kube-api-access-x5zck\") pod \"nginx-deployment-7fcdb87857-ssnr9\" (UID: \"1a712060-ee7d-427b-b665-7f281cead12d\") " pod="default/nginx-deployment-7fcdb87857-ssnr9" Jan 30 13:41:38.830765 kubelet[1771]: I0130 13:41:38.830721 1771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Jan 30 13:41:38.831486 containerd[1470]: time="2025-01-30T13:41:38.831412561Z" level=info msg="StopPodSandbox for \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\"" Jan 30 13:41:38.831853 containerd[1470]: time="2025-01-30T13:41:38.831683594Z" level=info msg="Ensure that sandbox 55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a in task-service has been cleanup successfully" Jan 30 13:41:38.860825 containerd[1470]: time="2025-01-30T13:41:38.860774715Z" level=error msg="StopPodSandbox for 
\"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\" failed" error="failed to destroy network for sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:38.861261 kubelet[1771]: E0130 13:41:38.861214 1771 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Jan 30 13:41:38.861402 kubelet[1771]: E0130 13:41:38.861309 1771 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a"} Jan 30 13:41:38.861466 kubelet[1771]: E0130 13:41:38.861441 1771 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d23cfe7e-0895-464a-a0d2-82dfd5749cfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:41:38.861556 kubelet[1771]: E0130 13:41:38.861478 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d23cfe7e-0895-464a-a0d2-82dfd5749cfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjlb4" podUID="d23cfe7e-0895-464a-a0d2-82dfd5749cfe" Jan 30 13:41:39.012741 containerd[1470]: time="2025-01-30T13:41:39.012436123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ssnr9,Uid:1a712060-ee7d-427b-b665-7f281cead12d,Namespace:default,Attempt:0,}" Jan 30 13:41:39.190849 containerd[1470]: time="2025-01-30T13:41:39.190789710Z" level=error msg="Failed to destroy network for sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:39.191251 containerd[1470]: time="2025-01-30T13:41:39.191201289Z" level=error msg="encountered an error cleaning up failed sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:39.191351 containerd[1470]: time="2025-01-30T13:41:39.191252647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ssnr9,Uid:1a712060-ee7d-427b-b665-7f281cead12d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:39.192187 
kubelet[1771]: E0130 13:41:39.192117 1771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:39.192187 kubelet[1771]: E0130 13:41:39.192186 1771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-ssnr9" Jan 30 13:41:39.192361 kubelet[1771]: E0130 13:41:39.192208 1771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-ssnr9" Jan 30 13:41:39.192361 kubelet[1771]: E0130 13:41:39.192292 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-ssnr9_default(1a712060-ee7d-427b-b665-7f281cead12d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-ssnr9_default(1a712060-ee7d-427b-b665-7f281cead12d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-ssnr9" podUID="1a712060-ee7d-427b-b665-7f281cead12d" Jan 30 13:41:39.193252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21-shm.mount: Deactivated successfully. Jan 30 13:41:39.216548 kubelet[1771]: E0130 13:41:39.216487 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:39.833100 kubelet[1771]: I0130 13:41:39.833057 1771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Jan 30 13:41:39.833591 containerd[1470]: time="2025-01-30T13:41:39.833552153Z" level=info msg="StopPodSandbox for \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\"" Jan 30 13:41:39.833918 containerd[1470]: time="2025-01-30T13:41:39.833751204Z" level=info msg="Ensure that sandbox 4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21 in task-service has been cleanup successfully" Jan 30 13:41:39.861560 containerd[1470]: time="2025-01-30T13:41:39.861506365Z" level=error msg="StopPodSandbox for \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\" failed" error="failed to destroy network for sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:39.861793 kubelet[1771]: E0130 13:41:39.861740 1771 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Jan 30 13:41:39.861851 kubelet[1771]: E0130 13:41:39.861804 1771 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21"} Jan 30 13:41:39.861874 kubelet[1771]: E0130 13:41:39.861848 1771 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a712060-ee7d-427b-b665-7f281cead12d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:41:39.861936 kubelet[1771]: E0130 13:41:39.861876 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a712060-ee7d-427b-b665-7f281cead12d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-ssnr9" podUID="1a712060-ee7d-427b-b665-7f281cead12d" Jan 30 13:41:40.217780 kubelet[1771]: E0130 13:41:40.217630 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:41.218260 kubelet[1771]: E0130 13:41:41.218214 1771 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:42.218591 kubelet[1771]: E0130 13:41:42.218549 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:42.814461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364057799.mount: Deactivated successfully. Jan 30 13:41:43.219917 kubelet[1771]: E0130 13:41:43.219749 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:43.637622 containerd[1470]: time="2025-01-30T13:41:43.637522836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:43.638246 containerd[1470]: time="2025-01-30T13:41:43.638214331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:41:43.639469 containerd[1470]: time="2025-01-30T13:41:43.639394433Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:43.641697 containerd[1470]: time="2025-01-30T13:41:43.641660504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:43.642267 containerd[1470]: time="2025-01-30T13:41:43.642215762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.812441164s" Jan 30 13:41:43.642267 containerd[1470]: 
time="2025-01-30T13:41:43.642261325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:41:43.653146 containerd[1470]: time="2025-01-30T13:41:43.653094961Z" level=info msg="CreateContainer within sandbox \"5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:41:43.671107 containerd[1470]: time="2025-01-30T13:41:43.671067761Z" level=info msg="CreateContainer within sandbox \"5310b3aa149daa50d2ab63dd0710b390f0aecf731dedd66d2fcd1ad469e12986\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"541700cccaee641e34952999f83fdfbe4da3c07eb496c683801e131868f0d32a\"" Jan 30 13:41:43.671702 containerd[1470]: time="2025-01-30T13:41:43.671666485Z" level=info msg="StartContainer for \"541700cccaee641e34952999f83fdfbe4da3c07eb496c683801e131868f0d32a\"" Jan 30 13:41:43.723575 systemd[1]: Started cri-containerd-541700cccaee641e34952999f83fdfbe4da3c07eb496c683801e131868f0d32a.scope - libcontainer container 541700cccaee641e34952999f83fdfbe4da3c07eb496c683801e131868f0d32a. Jan 30 13:41:43.786875 containerd[1470]: time="2025-01-30T13:41:43.786832678Z" level=info msg="StartContainer for \"541700cccaee641e34952999f83fdfbe4da3c07eb496c683801e131868f0d32a\" returns successfully" Jan 30 13:41:43.843658 kubelet[1771]: E0130 13:41:43.843580 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:43.868268 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:41:43.868388 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 13:41:44.220939 kubelet[1771]: E0130 13:41:44.220880 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:44.845098 kubelet[1771]: I0130 13:41:44.845053 1771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:41:44.845518 kubelet[1771]: E0130 13:41:44.845471 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:45.234921 kubelet[1771]: E0130 13:41:45.234765 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:45.446467 kernel: bpftool[2557]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:41:45.672450 systemd-networkd[1406]: vxlan.calico: Link UP Jan 30 13:41:45.672463 systemd-networkd[1406]: vxlan.calico: Gained carrier Jan 30 13:41:46.209817 kubelet[1771]: E0130 13:41:46.209753 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:46.235317 kubelet[1771]: E0130 13:41:46.235287 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:47.062645 systemd-networkd[1406]: vxlan.calico: Gained IPv6LL Jan 30 13:41:47.236509 kubelet[1771]: E0130 13:41:47.236444 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:48.237258 kubelet[1771]: E0130 13:41:48.237200 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:49.237721 kubelet[1771]: E0130 13:41:49.237679 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:50.237988 
kubelet[1771]: E0130 13:41:50.237928 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:50.803386 containerd[1470]: time="2025-01-30T13:41:50.803329561Z" level=info msg="StopPodSandbox for \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\"" Jan 30 13:41:50.842747 kubelet[1771]: I0130 13:41:50.842684 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9vmdx" podStartSLOduration=9.878442565 podStartE2EDuration="23.842662122s" podCreationTimestamp="2025-01-30 13:41:27 +0000 UTC" firstStartedPulling="2025-01-30 13:41:29.678764818 +0000 UTC m=+3.911025186" lastFinishedPulling="2025-01-30 13:41:43.642984375 +0000 UTC m=+17.875244743" observedRunningTime="2025-01-30 13:41:43.856862696 +0000 UTC m=+18.089123064" watchObservedRunningTime="2025-01-30 13:41:50.842662122 +0000 UTC m=+25.074922480" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.842 [INFO][2650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.842 [INFO][2650] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" iface="eth0" netns="/var/run/netns/cni-38035b29-cc1a-e0cd-c998-9cb13b299f1e" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.843 [INFO][2650] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" iface="eth0" netns="/var/run/netns/cni-38035b29-cc1a-e0cd-c998-9cb13b299f1e" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.843 [INFO][2650] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" iface="eth0" netns="/var/run/netns/cni-38035b29-cc1a-e0cd-c998-9cb13b299f1e" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.843 [INFO][2650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.843 [INFO][2650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.864 [INFO][2657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" HandleID="k8s-pod-network.55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Workload="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.864 [INFO][2657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.864 [INFO][2657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.869 [WARNING][2657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" HandleID="k8s-pod-network.55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Workload="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.869 [INFO][2657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" HandleID="k8s-pod-network.55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Workload="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.871 [INFO][2657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:50.878078 containerd[1470]: 2025-01-30 13:41:50.875 [INFO][2650] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a" Jan 30 13:41:50.878539 containerd[1470]: time="2025-01-30T13:41:50.878242480Z" level=info msg="TearDown network for sandbox \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\" successfully" Jan 30 13:41:50.878539 containerd[1470]: time="2025-01-30T13:41:50.878269149Z" level=info msg="StopPodSandbox for \"55c829fd4b7bffb8757bf00085b606c1c3a0a1c46100d348fee15fbfe473418a\" returns successfully" Jan 30 13:41:50.878985 containerd[1470]: time="2025-01-30T13:41:50.878966143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjlb4,Uid:d23cfe7e-0895-464a-a0d2-82dfd5749cfe,Namespace:calico-system,Attempt:1,}" Jan 30 13:41:50.880004 systemd[1]: run-netns-cni\x2d38035b29\x2dcc1a\x2de0cd\x2dc998\x2d9cb13b299f1e.mount: Deactivated successfully. 
Jan 30 13:41:50.976735 systemd-networkd[1406]: cali95ed7256829: Link UP Jan 30 13:41:50.977460 systemd-networkd[1406]: cali95ed7256829: Gained carrier Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.919 [INFO][2665] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.74-k8s-csi--node--driver--rjlb4-eth0 csi-node-driver- calico-system d23cfe7e-0895-464a-a0d2-82dfd5749cfe 1006 0 2025-01-30 13:41:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.74 csi-node-driver-rjlb4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali95ed7256829 [] []}} ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Namespace="calico-system" Pod="csi-node-driver-rjlb4" WorkloadEndpoint="10.0.0.74-k8s-csi--node--driver--rjlb4-" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.919 [INFO][2665] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Namespace="calico-system" Pod="csi-node-driver-rjlb4" WorkloadEndpoint="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.943 [INFO][2678] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" HandleID="k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Workload="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.950 [INFO][2678] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" HandleID="k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Workload="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000362a40), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.74", "pod":"csi-node-driver-rjlb4", "timestamp":"2025-01-30 13:41:50.94303731 +0000 UTC"}, Hostname:"10.0.0.74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.950 [INFO][2678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.950 [INFO][2678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.950 [INFO][2678] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.74' Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.952 [INFO][2678] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.955 [INFO][2678] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.959 [INFO][2678] ipam/ipam.go 489: Trying affinity for 192.168.67.128/26 host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.960 [INFO][2678] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.128/26 host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.962 [INFO][2678] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 
host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.962 [INFO][2678] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.963 [INFO][2678] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6 Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.967 [INFO][2678] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.971 [INFO][2678] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.129/26] block=192.168.67.128/26 handle="k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.971 [INFO][2678] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.129/26] handle="k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" host="10.0.0.74" Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.971 [INFO][2678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:41:50.989903 containerd[1470]: 2025-01-30 13:41:50.971 [INFO][2678] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.129/26] IPv6=[] ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" HandleID="k8s-pod-network.d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Workload="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.991219 containerd[1470]: 2025-01-30 13:41:50.974 [INFO][2665] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Namespace="calico-system" Pod="csi-node-driver-rjlb4" WorkloadEndpoint="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.74-k8s-csi--node--driver--rjlb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d23cfe7e-0895-464a-a0d2-82dfd5749cfe", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.74", ContainerID:"", Pod:"csi-node-driver-rjlb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali95ed7256829", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:50.991219 containerd[1470]: 2025-01-30 13:41:50.974 [INFO][2665] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.129/32] ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Namespace="calico-system" Pod="csi-node-driver-rjlb4" WorkloadEndpoint="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.991219 containerd[1470]: 2025-01-30 13:41:50.974 [INFO][2665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95ed7256829 ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Namespace="calico-system" Pod="csi-node-driver-rjlb4" WorkloadEndpoint="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.991219 containerd[1470]: 2025-01-30 13:41:50.976 [INFO][2665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Namespace="calico-system" Pod="csi-node-driver-rjlb4" WorkloadEndpoint="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:50.991219 containerd[1470]: 2025-01-30 13:41:50.977 [INFO][2665] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Namespace="calico-system" Pod="csi-node-driver-rjlb4" WorkloadEndpoint="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.74-k8s-csi--node--driver--rjlb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d23cfe7e-0895-464a-a0d2-82dfd5749cfe", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 27, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.74", ContainerID:"d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6", Pod:"csi-node-driver-rjlb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali95ed7256829", MAC:"66:df:c7:4a:b7:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:50.991219 containerd[1470]: 2025-01-30 13:41:50.985 [INFO][2665] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6" Namespace="calico-system" Pod="csi-node-driver-rjlb4" WorkloadEndpoint="10.0.0.74-k8s-csi--node--driver--rjlb4-eth0" Jan 30 13:41:51.009450 containerd[1470]: time="2025-01-30T13:41:51.009299058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:51.010110 containerd[1470]: time="2025-01-30T13:41:51.010049112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:51.010110 containerd[1470]: time="2025-01-30T13:41:51.010075047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:51.010277 containerd[1470]: time="2025-01-30T13:41:51.010167764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:51.030575 systemd[1]: Started cri-containerd-d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6.scope - libcontainer container d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6. Jan 30 13:41:51.040533 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:41:51.051322 containerd[1470]: time="2025-01-30T13:41:51.051284320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjlb4,Uid:d23cfe7e-0895-464a-a0d2-82dfd5749cfe,Namespace:calico-system,Attempt:1,} returns sandbox id \"d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6\"" Jan 30 13:41:51.052653 containerd[1470]: time="2025-01-30T13:41:51.052625892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:41:51.239221 kubelet[1771]: E0130 13:41:51.239084 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:51.803486 containerd[1470]: time="2025-01-30T13:41:51.803416457Z" level=info msg="StopPodSandbox for \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\"" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.839 [INFO][2768] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.839 [INFO][2768] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" iface="eth0" netns="/var/run/netns/cni-554f6575-1f6e-cc94-b869-960bf2957de0" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.840 [INFO][2768] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" iface="eth0" netns="/var/run/netns/cni-554f6575-1f6e-cc94-b869-960bf2957de0" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.840 [INFO][2768] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" iface="eth0" netns="/var/run/netns/cni-554f6575-1f6e-cc94-b869-960bf2957de0" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.840 [INFO][2768] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.840 [INFO][2768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.885 [INFO][2775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" HandleID="k8s-pod-network.4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Workload="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.885 [INFO][2775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.885 [INFO][2775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.890 [WARNING][2775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" HandleID="k8s-pod-network.4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Workload="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.890 [INFO][2775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" HandleID="k8s-pod-network.4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Workload="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.891 [INFO][2775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:51.895515 containerd[1470]: 2025-01-30 13:41:51.893 [INFO][2768] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21" Jan 30 13:41:51.896081 containerd[1470]: time="2025-01-30T13:41:51.895736516Z" level=info msg="TearDown network for sandbox \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\" successfully" Jan 30 13:41:51.896081 containerd[1470]: time="2025-01-30T13:41:51.895763736Z" level=info msg="StopPodSandbox for \"4b11a0eac68b859dee2fca96b072305b29bafec92820186bbd9d1a22e0ce6b21\" returns successfully" Jan 30 13:41:51.896674 containerd[1470]: time="2025-01-30T13:41:51.896647760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ssnr9,Uid:1a712060-ee7d-427b-b665-7f281cead12d,Namespace:default,Attempt:1,}" Jan 30 13:41:51.897329 systemd[1]: run-netns-cni\x2d554f6575\x2d1f6e\x2dcc94\x2db869\x2d960bf2957de0.mount: Deactivated successfully. 
Jan 30 13:41:52.010558 systemd-networkd[1406]: cali8b11d4f2589: Link UP Jan 30 13:41:52.011169 systemd-networkd[1406]: cali8b11d4f2589: Gained carrier Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.945 [INFO][2782] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0 nginx-deployment-7fcdb87857- default 1a712060-ee7d-427b-b665-7f281cead12d 1013 0 2025-01-30 13:41:38 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.74 nginx-deployment-7fcdb87857-ssnr9 eth0 default [] [] [kns.default ksa.default.default] cali8b11d4f2589 [] []}} ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Namespace="default" Pod="nginx-deployment-7fcdb87857-ssnr9" WorkloadEndpoint="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.945 [INFO][2782] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Namespace="default" Pod="nginx-deployment-7fcdb87857-ssnr9" WorkloadEndpoint="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.972 [INFO][2796] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" HandleID="k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Workload="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.980 [INFO][2796] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" 
HandleID="k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Workload="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4ea0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.74", "pod":"nginx-deployment-7fcdb87857-ssnr9", "timestamp":"2025-01-30 13:41:51.97288055 +0000 UTC"}, Hostname:"10.0.0.74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.980 [INFO][2796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.980 [INFO][2796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.980 [INFO][2796] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.74' Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.982 [INFO][2796] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.986 [INFO][2796] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.991 [INFO][2796] ipam/ipam.go 489: Trying affinity for 192.168.67.128/26 host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.993 [INFO][2796] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.128/26 host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.994 [INFO][2796] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 
13:41:51.995 [INFO][2796] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:51.996 [INFO][2796] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:52.000 [INFO][2796] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:52.005 [INFO][2796] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.130/26] block=192.168.67.128/26 handle="k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:52.005 [INFO][2796] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.130/26] handle="k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" host="10.0.0.74" Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:52.005 [INFO][2796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:41:52.020802 containerd[1470]: 2025-01-30 13:41:52.005 [INFO][2796] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.130/26] IPv6=[] ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" HandleID="k8s-pod-network.31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Workload="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:52.021453 containerd[1470]: 2025-01-30 13:41:52.008 [INFO][2782] cni-plugin/k8s.go 386: Populated endpoint ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Namespace="default" Pod="nginx-deployment-7fcdb87857-ssnr9" WorkloadEndpoint="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"1a712060-ee7d-427b-b665-7f281cead12d", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.74", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-ssnr9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8b11d4f2589", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:52.021453 containerd[1470]: 2025-01-30 13:41:52.008 [INFO][2782] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.130/32] ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Namespace="default" Pod="nginx-deployment-7fcdb87857-ssnr9" WorkloadEndpoint="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:52.021453 containerd[1470]: 2025-01-30 13:41:52.008 [INFO][2782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b11d4f2589 ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Namespace="default" Pod="nginx-deployment-7fcdb87857-ssnr9" WorkloadEndpoint="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:52.021453 containerd[1470]: 2025-01-30 13:41:52.010 [INFO][2782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Namespace="default" Pod="nginx-deployment-7fcdb87857-ssnr9" WorkloadEndpoint="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:52.021453 containerd[1470]: 2025-01-30 13:41:52.011 [INFO][2782] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Namespace="default" Pod="nginx-deployment-7fcdb87857-ssnr9" WorkloadEndpoint="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"1a712060-ee7d-427b-b665-7f281cead12d", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 38, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.74", ContainerID:"31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c", Pod:"nginx-deployment-7fcdb87857-ssnr9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8b11d4f2589", MAC:"92:34:5f:72:a0:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:52.021453 containerd[1470]: 2025-01-30 13:41:52.017 [INFO][2782] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c" Namespace="default" Pod="nginx-deployment-7fcdb87857-ssnr9" WorkloadEndpoint="10.0.0.74-k8s-nginx--deployment--7fcdb87857--ssnr9-eth0" Jan 30 13:41:52.044884 containerd[1470]: time="2025-01-30T13:41:52.044771565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:52.044884 containerd[1470]: time="2025-01-30T13:41:52.044852267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:52.044884 containerd[1470]: time="2025-01-30T13:41:52.044888880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:52.045115 containerd[1470]: time="2025-01-30T13:41:52.044977293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:52.064626 systemd[1]: Started cri-containerd-31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c.scope - libcontainer container 31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c. Jan 30 13:41:52.077414 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:41:52.114635 containerd[1470]: time="2025-01-30T13:41:52.114589506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ssnr9,Uid:1a712060-ee7d-427b-b665-7f281cead12d,Namespace:default,Attempt:1,} returns sandbox id \"31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c\"" Jan 30 13:41:52.239713 kubelet[1771]: E0130 13:41:52.239654 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:52.306310 containerd[1470]: time="2025-01-30T13:41:52.306251223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:52.306935 containerd[1470]: time="2025-01-30T13:41:52.306886330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:41:52.307905 containerd[1470]: time="2025-01-30T13:41:52.307862575Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:52.309839 containerd[1470]: time="2025-01-30T13:41:52.309801832Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:52.310373 containerd[1470]: time="2025-01-30T13:41:52.310333970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.257680188s" Jan 30 13:41:52.310411 containerd[1470]: time="2025-01-30T13:41:52.310371204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:41:52.311505 containerd[1470]: time="2025-01-30T13:41:52.311389004Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:41:52.312475 containerd[1470]: time="2025-01-30T13:41:52.312443626Z" level=info msg="CreateContainer within sandbox \"d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:41:52.327412 containerd[1470]: time="2025-01-30T13:41:52.327326233Z" level=info msg="CreateContainer within sandbox \"d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b1d3cf2c25193a202fd95700c50e3cf6afc331248a490989a9b2566fb680adc\"" Jan 30 13:41:52.327982 containerd[1470]: time="2025-01-30T13:41:52.327917029Z" level=info msg="StartContainer for \"9b1d3cf2c25193a202fd95700c50e3cf6afc331248a490989a9b2566fb680adc\"" Jan 30 13:41:52.357550 systemd[1]: Started cri-containerd-9b1d3cf2c25193a202fd95700c50e3cf6afc331248a490989a9b2566fb680adc.scope - libcontainer container 9b1d3cf2c25193a202fd95700c50e3cf6afc331248a490989a9b2566fb680adc. 
Jan 30 13:41:52.387180 containerd[1470]: time="2025-01-30T13:41:52.387131383Z" level=info msg="StartContainer for \"9b1d3cf2c25193a202fd95700c50e3cf6afc331248a490989a9b2566fb680adc\" returns successfully" Jan 30 13:41:52.822662 systemd-networkd[1406]: cali95ed7256829: Gained IPv6LL Jan 30 13:41:53.240875 kubelet[1771]: E0130 13:41:53.240757 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:53.334569 systemd-networkd[1406]: cali8b11d4f2589: Gained IPv6LL Jan 30 13:41:54.241740 kubelet[1771]: E0130 13:41:54.241673 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:55.242219 kubelet[1771]: E0130 13:41:55.242122 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:55.570266 kubelet[1771]: I0130 13:41:55.570231 1771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:41:55.570961 kubelet[1771]: E0130 13:41:55.570627 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:55.741750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952192767.mount: Deactivated successfully. 
Jan 30 13:41:55.879386 kubelet[1771]: E0130 13:41:55.878634 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:56.242551 kubelet[1771]: E0130 13:41:56.242394 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:56.910707 containerd[1470]: time="2025-01-30T13:41:56.910642293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:56.911727 containerd[1470]: time="2025-01-30T13:41:56.911406371Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:41:56.912999 containerd[1470]: time="2025-01-30T13:41:56.912949945Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:56.915586 containerd[1470]: time="2025-01-30T13:41:56.915533920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:56.916627 containerd[1470]: time="2025-01-30T13:41:56.916589035Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.605168211s" Jan 30 13:41:56.916627 containerd[1470]: time="2025-01-30T13:41:56.916624059Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:41:56.917946 containerd[1470]: time="2025-01-30T13:41:56.917889848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:41:56.918872 containerd[1470]: time="2025-01-30T13:41:56.918841836Z" level=info msg="CreateContainer within sandbox \"31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:41:56.931644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067712123.mount: Deactivated successfully. Jan 30 13:41:56.932815 containerd[1470]: time="2025-01-30T13:41:56.932774790Z" level=info msg="CreateContainer within sandbox \"31e85362de0d400abdca744f34822dd134bd9472e435ab174cf34bad6eba433c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"562f65a9c90b9166996dd105a2c03d86c1fe80ea36bea0dfeb159d087fd2f632\"" Jan 30 13:41:56.933367 containerd[1470]: time="2025-01-30T13:41:56.933331543Z" level=info msg="StartContainer for \"562f65a9c90b9166996dd105a2c03d86c1fe80ea36bea0dfeb159d087fd2f632\"" Jan 30 13:41:56.987695 systemd[1]: Started cri-containerd-562f65a9c90b9166996dd105a2c03d86c1fe80ea36bea0dfeb159d087fd2f632.scope - libcontainer container 562f65a9c90b9166996dd105a2c03d86c1fe80ea36bea0dfeb159d087fd2f632. 
Jan 30 13:41:57.122384 containerd[1470]: time="2025-01-30T13:41:57.122330297Z" level=info msg="StartContainer for \"562f65a9c90b9166996dd105a2c03d86c1fe80ea36bea0dfeb159d087fd2f632\" returns successfully" Jan 30 13:41:57.242972 kubelet[1771]: E0130 13:41:57.242768 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:57.890946 kubelet[1771]: I0130 13:41:57.890881 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-ssnr9" podStartSLOduration=15.088692727 podStartE2EDuration="19.890860039s" podCreationTimestamp="2025-01-30 13:41:38 +0000 UTC" firstStartedPulling="2025-01-30 13:41:52.115485649 +0000 UTC m=+26.347746017" lastFinishedPulling="2025-01-30 13:41:56.917652961 +0000 UTC m=+31.149913329" observedRunningTime="2025-01-30 13:41:57.890789963 +0000 UTC m=+32.123050331" watchObservedRunningTime="2025-01-30 13:41:57.890860039 +0000 UTC m=+32.123120408" Jan 30 13:41:57.960539 update_engine[1451]: I20250130 13:41:57.960474 1451 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:41:57.986521 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3044) Jan 30 13:41:58.031531 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3043) Jan 30 13:41:58.063621 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3043) Jan 30 13:41:58.243517 kubelet[1771]: E0130 13:41:58.243383 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:41:58.633562 containerd[1470]: time="2025-01-30T13:41:58.633505207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:58.634356 containerd[1470]: time="2025-01-30T13:41:58.634318712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:41:58.635861 containerd[1470]: time="2025-01-30T13:41:58.635831387Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:58.637723 containerd[1470]: time="2025-01-30T13:41:58.637686795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:58.638269 containerd[1470]: time="2025-01-30T13:41:58.638236763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.72029878s" Jan 30 13:41:58.638304 containerd[1470]: time="2025-01-30T13:41:58.638267014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:41:58.640242 containerd[1470]: time="2025-01-30T13:41:58.640211952Z" level=info msg="CreateContainer within sandbox \"d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:41:58.656039 containerd[1470]: time="2025-01-30T13:41:58.655993080Z" level=info msg="CreateContainer within sandbox \"d63a545197a17ba0738947ee17a5746dbdf81b128804156cc094323061bc0bb6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b48b498c464a0b0e0b72e69414cffbd637cb8e7492c4606d0c5352d52ba44508\"" Jan 30 13:41:58.656528 containerd[1470]: time="2025-01-30T13:41:58.656494405Z" level=info msg="StartContainer for \"b48b498c464a0b0e0b72e69414cffbd637cb8e7492c4606d0c5352d52ba44508\"" Jan 30 13:41:58.687623 systemd[1]: Started cri-containerd-b48b498c464a0b0e0b72e69414cffbd637cb8e7492c4606d0c5352d52ba44508.scope - libcontainer container b48b498c464a0b0e0b72e69414cffbd637cb8e7492c4606d0c5352d52ba44508. 
Jan 30 13:41:58.758018 containerd[1470]: time="2025-01-30T13:41:58.757966375Z" level=info msg="StartContainer for \"b48b498c464a0b0e0b72e69414cffbd637cb8e7492c4606d0c5352d52ba44508\" returns successfully" Jan 30 13:41:58.873517 kubelet[1771]: I0130 13:41:58.873462 1771 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:41:58.873517 kubelet[1771]: I0130 13:41:58.873495 1771 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:41:58.895959 kubelet[1771]: I0130 13:41:58.895856 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rjlb4" podStartSLOduration=24.309225556 podStartE2EDuration="31.895842086s" podCreationTimestamp="2025-01-30 13:41:27 +0000 UTC" firstStartedPulling="2025-01-30 13:41:51.052324303 +0000 UTC m=+25.284584671" lastFinishedPulling="2025-01-30 13:41:58.638940833 +0000 UTC m=+32.871201201" observedRunningTime="2025-01-30 13:41:58.895678974 +0000 UTC m=+33.127939342" watchObservedRunningTime="2025-01-30 13:41:58.895842086 +0000 UTC m=+33.128102454" Jan 30 13:41:59.243677 kubelet[1771]: E0130 13:41:59.243555 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:42:00.244710 kubelet[1771]: E0130 13:42:00.244634 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:42:01.140053 systemd[1]: Created slice kubepods-besteffort-pod27e0d69a_e0fd_4fd5_9724_9bcf06f329b9.slice - libcontainer container kubepods-besteffort-pod27e0d69a_e0fd_4fd5_9724_9bcf06f329b9.slice. 
Jan 30 13:42:01.141616 kubelet[1771]: I0130 13:42:01.141591 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n88t\" (UniqueName: \"kubernetes.io/projected/27e0d69a-e0fd-4fd5-9724-9bcf06f329b9-kube-api-access-2n88t\") pod \"nfs-server-provisioner-0\" (UID: \"27e0d69a-e0fd-4fd5-9724-9bcf06f329b9\") " pod="default/nfs-server-provisioner-0" Jan 30 13:42:01.141715 kubelet[1771]: I0130 13:42:01.141635 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/27e0d69a-e0fd-4fd5-9724-9bcf06f329b9-data\") pod \"nfs-server-provisioner-0\" (UID: \"27e0d69a-e0fd-4fd5-9724-9bcf06f329b9\") " pod="default/nfs-server-provisioner-0" Jan 30 13:42:01.245683 kubelet[1771]: E0130 13:42:01.245635 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:42:01.443817 containerd[1470]: time="2025-01-30T13:42:01.443685104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:27e0d69a-e0fd-4fd5-9724-9bcf06f329b9,Namespace:default,Attempt:0,}" Jan 30 13:42:01.545243 systemd-networkd[1406]: cali60e51b789ff: Link UP Jan 30 13:42:01.545772 systemd-networkd[1406]: cali60e51b789ff: Gained carrier Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.484 [INFO][3098] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.74-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 27e0d69a-e0fd-4fd5-9724-9bcf06f329b9 1085 0 2025-01-30 13:42:01 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.74 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.74-k8s-nfs--server--provisioner--0-" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.484 [INFO][3098] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.510 [INFO][3112] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" HandleID="k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Workload="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.517 [INFO][3112] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" HandleID="k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Workload="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295470), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.74", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-30 
13:42:01.510504766 +0000 UTC"}, Hostname:"10.0.0.74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.517 [INFO][3112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.517 [INFO][3112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.517 [INFO][3112] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.74' Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.519 [INFO][3112] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.522 [INFO][3112] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.526 [INFO][3112] ipam/ipam.go 489: Trying affinity for 192.168.67.128/26 host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.528 [INFO][3112] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.128/26 host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.529 [INFO][3112] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.529 [INFO][3112] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.530 [INFO][3112] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7 Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.536 [INFO][3112] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.540 [INFO][3112] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.131/26] block=192.168.67.128/26 handle="k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.540 [INFO][3112] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.131/26] handle="k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" host="10.0.0.74" Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.540 [INFO][3112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:42:01.556905 containerd[1470]: 2025-01-30 13:42:01.540 [INFO][3112] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.131/26] IPv6=[] ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" HandleID="k8s-pod-network.270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Workload="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:42:01.557466 containerd[1470]: 2025-01-30 13:42:01.542 [INFO][3098] cni-plugin/k8s.go 386: Populated endpoint ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.74-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"27e0d69a-e0fd-4fd5-9724-9bcf06f329b9", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.74", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.67.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:42:01.557466 containerd[1470]: 2025-01-30 13:42:01.543 [INFO][3098] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.131/32] ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:42:01.557466 containerd[1470]: 2025-01-30 13:42:01.543 [INFO][3098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:42:01.557466 containerd[1470]: 2025-01-30 13:42:01.544 [INFO][3098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:42:01.557663 containerd[1470]: 2025-01-30 13:42:01.545 [INFO][3098] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.74-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"27e0d69a-e0fd-4fd5-9724-9bcf06f329b9", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.74", ContainerID:"270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.67.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"7a:c1:7e:d6:e9:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:42:01.557663 containerd[1470]: 2025-01-30 13:42:01.554 [INFO][3098] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.74-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:42:01.580126 containerd[1470]: time="2025-01-30T13:42:01.580020915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:42:01.580126 containerd[1470]: time="2025-01-30T13:42:01.580087858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:42:01.580126 containerd[1470]: time="2025-01-30T13:42:01.580100567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:01.580344 containerd[1470]: time="2025-01-30T13:42:01.580192145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:01.604568 systemd[1]: Started cri-containerd-270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7.scope - libcontainer container 270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7. Jan 30 13:42:01.615323 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:42:01.637913 containerd[1470]: time="2025-01-30T13:42:01.637846788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:27e0d69a-e0fd-4fd5-9724-9bcf06f329b9,Namespace:default,Attempt:0,} returns sandbox id \"270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7\"" Jan 30 13:42:01.639293 containerd[1470]: time="2025-01-30T13:42:01.639262886Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:42:02.246402 kubelet[1771]: E0130 13:42:02.246334 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:42:03.246921 kubelet[1771]: E0130 13:42:03.246843 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:42:03.322119 systemd-networkd[1406]: cali60e51b789ff: Gained IPv6LL Jan 30 13:42:03.470963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665991550.mount: Deactivated successfully. 
Jan 30 13:42:04.247057 kubelet[1771]: E0130 13:42:04.247006 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:42:05.073531 containerd[1470]: time="2025-01-30T13:42:05.073470884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:05.074115 containerd[1470]: time="2025-01-30T13:42:05.074079139Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 13:42:05.075353 containerd[1470]: time="2025-01-30T13:42:05.075301993Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:05.079365 containerd[1470]: time="2025-01-30T13:42:05.079239601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:05.080526 containerd[1470]: time="2025-01-30T13:42:05.080468448Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.441164059s" Jan 30 13:42:05.080746 containerd[1470]: time="2025-01-30T13:42:05.080708980Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:42:05.082879 containerd[1470]: time="2025-01-30T13:42:05.082849493Z" 
level=info msg="CreateContainer within sandbox \"270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:42:05.095455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895403465.mount: Deactivated successfully. Jan 30 13:42:05.097746 containerd[1470]: time="2025-01-30T13:42:05.097712308Z" level=info msg="CreateContainer within sandbox \"270fbe38b3350ae3c5b72572979c9f672e2bf55beae66ba40e99adf2e47cedd7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"34546a041e638d72a41f3f3a5d65ad28565d8721544f91e441a99d35996ec6c0\"" Jan 30 13:42:05.098106 containerd[1470]: time="2025-01-30T13:42:05.098083359Z" level=info msg="StartContainer for \"34546a041e638d72a41f3f3a5d65ad28565d8721544f91e441a99d35996ec6c0\"" Jan 30 13:42:05.128680 systemd[1]: Started cri-containerd-34546a041e638d72a41f3f3a5d65ad28565d8721544f91e441a99d35996ec6c0.scope - libcontainer container 34546a041e638d72a41f3f3a5d65ad28565d8721544f91e441a99d35996ec6c0. 
Jan 30 13:42:05.154728 containerd[1470]: time="2025-01-30T13:42:05.154673433Z" level=info msg="StartContainer for \"34546a041e638d72a41f3f3a5d65ad28565d8721544f91e441a99d35996ec6c0\" returns successfully"
Jan 30 13:42:05.247306 kubelet[1771]: E0130 13:42:05.247248 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:05.909598 kubelet[1771]: I0130 13:42:05.909534 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.467058919 podStartE2EDuration="4.909516766s" podCreationTimestamp="2025-01-30 13:42:01 +0000 UTC" firstStartedPulling="2025-01-30 13:42:01.639023711 +0000 UTC m=+35.871284079" lastFinishedPulling="2025-01-30 13:42:05.081481558 +0000 UTC m=+39.313741926" observedRunningTime="2025-01-30 13:42:05.908877031 +0000 UTC m=+40.141137399" watchObservedRunningTime="2025-01-30 13:42:05.909516766 +0000 UTC m=+40.141777134"
Jan 30 13:42:06.209922 kubelet[1771]: E0130 13:42:06.209768 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:06.248290 kubelet[1771]: E0130 13:42:06.248257 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:07.248978 kubelet[1771]: E0130 13:42:07.248917 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:08.249641 kubelet[1771]: E0130 13:42:08.249592 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:09.250662 kubelet[1771]: E0130 13:42:09.250605 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:10.251233 kubelet[1771]: E0130 13:42:10.251180 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:11.251952 kubelet[1771]: E0130 13:42:11.251883 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:12.252074 kubelet[1771]: E0130 13:42:12.252020 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:13.252551 kubelet[1771]: E0130 13:42:13.252487 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:14.253544 kubelet[1771]: E0130 13:42:14.253489 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:15.253605 kubelet[1771]: E0130 13:42:15.253566 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:15.325324 systemd[1]: Created slice kubepods-besteffort-pod7d1e4c49_85fa_43d5_beb1_21889cc1ae9c.slice - libcontainer container kubepods-besteffort-pod7d1e4c49_85fa_43d5_beb1_21889cc1ae9c.slice.
Jan 30 13:42:15.511547 kubelet[1771]: I0130 13:42:15.511385 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3cb774ae-51f1-4faf-9290-8e572fcb1838\" (UniqueName: \"kubernetes.io/nfs/7d1e4c49-85fa-43d5-beb1-21889cc1ae9c-pvc-3cb774ae-51f1-4faf-9290-8e572fcb1838\") pod \"test-pod-1\" (UID: \"7d1e4c49-85fa-43d5-beb1-21889cc1ae9c\") " pod="default/test-pod-1"
Jan 30 13:42:15.511547 kubelet[1771]: I0130 13:42:15.511472 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqnrw\" (UniqueName: \"kubernetes.io/projected/7d1e4c49-85fa-43d5-beb1-21889cc1ae9c-kube-api-access-fqnrw\") pod \"test-pod-1\" (UID: \"7d1e4c49-85fa-43d5-beb1-21889cc1ae9c\") " pod="default/test-pod-1"
Jan 30 13:42:15.635471 kernel: FS-Cache: Loaded
Jan 30 13:42:15.703922 kernel: RPC: Registered named UNIX socket transport module.
Jan 30 13:42:15.704043 kernel: RPC: Registered udp transport module.
Jan 30 13:42:15.704066 kernel: RPC: Registered tcp transport module.
Jan 30 13:42:15.704095 kernel: RPC: Registered tcp-with-tls transport module.
Jan 30 13:42:15.704639 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 30 13:42:16.002965 kernel: NFS: Registering the id_resolver key type
Jan 30 13:42:16.003092 kernel: Key type id_resolver registered
Jan 30 13:42:16.003116 kernel: Key type id_legacy registered
Jan 30 13:42:16.030718 nfsidmap[3311]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 30 13:42:16.035028 nfsidmap[3314]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 30 13:42:16.228392 containerd[1470]: time="2025-01-30T13:42:16.228337180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7d1e4c49-85fa-43d5-beb1-21889cc1ae9c,Namespace:default,Attempt:0,}"
Jan 30 13:42:16.254613 kubelet[1771]: E0130 13:42:16.254513 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:16.331824 systemd-networkd[1406]: cali5ec59c6bf6e: Link UP
Jan 30 13:42:16.332506 systemd-networkd[1406]: cali5ec59c6bf6e: Gained carrier
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.271 [INFO][3317] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.74-k8s-test--pod--1-eth0 default 7d1e4c49-85fa-43d5-beb1-21889cc1ae9c 1161 0 2025-01-30 13:42:01 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.74 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.74-k8s-test--pod--1-"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.271 [INFO][3317] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.74-k8s-test--pod--1-eth0"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.295 [INFO][3331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" HandleID="k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Workload="10.0.0.74-k8s-test--pod--1-eth0"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.303 [INFO][3331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" HandleID="k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Workload="10.0.0.74-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3db0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.74", "pod":"test-pod-1", "timestamp":"2025-01-30 13:42:16.295124366 +0000 UTC"}, Hostname:"10.0.0.74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.303 [INFO][3331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.303 [INFO][3331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.303 [INFO][3331] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.74'
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.305 [INFO][3331] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.309 [INFO][3331] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.312 [INFO][3331] ipam/ipam.go 489: Trying affinity for 192.168.67.128/26 host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.314 [INFO][3331] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.128/26 host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.316 [INFO][3331] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.316 [INFO][3331] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.318 [INFO][3331] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.321 [INFO][3331] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.327 [INFO][3331] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.132/26] block=192.168.67.128/26 handle="k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.327 [INFO][3331] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.132/26] handle="k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" host="10.0.0.74"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.327 [INFO][3331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.327 [INFO][3331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.132/26] IPv6=[] ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" HandleID="k8s-pod-network.d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Workload="10.0.0.74-k8s-test--pod--1-eth0"
Jan 30 13:42:16.341760 containerd[1470]: 2025-01-30 13:42:16.330 [INFO][3317] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.74-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.74-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7d1e4c49-85fa-43d5-beb1-21889cc1ae9c", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.74", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:42:16.342408 containerd[1470]: 2025-01-30 13:42:16.330 [INFO][3317] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.132/32] ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.74-k8s-test--pod--1-eth0"
Jan 30 13:42:16.342408 containerd[1470]: 2025-01-30 13:42:16.330 [INFO][3317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.74-k8s-test--pod--1-eth0"
Jan 30 13:42:16.342408 containerd[1470]: 2025-01-30 13:42:16.332 [INFO][3317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.74-k8s-test--pod--1-eth0"
Jan 30 13:42:16.342408 containerd[1470]: 2025-01-30 13:42:16.332 [INFO][3317] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.74-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.74-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7d1e4c49-85fa-43d5-beb1-21889cc1ae9c", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.74", ContainerID:"d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"12:9b:45:b0:47:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:42:16.342408 containerd[1470]: 2025-01-30 13:42:16.339 [INFO][3317] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.74-k8s-test--pod--1-eth0"
Jan 30 13:42:16.360793 containerd[1470]: time="2025-01-30T13:42:16.360693602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:42:16.360793 containerd[1470]: time="2025-01-30T13:42:16.360751473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:42:16.360793 containerd[1470]: time="2025-01-30T13:42:16.360764070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:42:16.360989 containerd[1470]: time="2025-01-30T13:42:16.360838285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:42:16.383544 systemd[1]: Started cri-containerd-d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909.scope - libcontainer container d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909.
Jan 30 13:42:16.395567 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 13:42:16.419343 containerd[1470]: time="2025-01-30T13:42:16.419296542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7d1e4c49-85fa-43d5-beb1-21889cc1ae9c,Namespace:default,Attempt:0,} returns sandbox id \"d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909\""
Jan 30 13:42:16.420491 containerd[1470]: time="2025-01-30T13:42:16.420471012Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 30 13:42:16.816795 containerd[1470]: time="2025-01-30T13:42:16.816746465Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:42:16.817464 containerd[1470]: time="2025-01-30T13:42:16.817408782Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 30 13:42:16.820191 containerd[1470]: time="2025-01-30T13:42:16.820146354Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 399.640048ms"
Jan 30 13:42:16.820227 containerd[1470]: time="2025-01-30T13:42:16.820190215Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 30 13:42:16.821730 containerd[1470]: time="2025-01-30T13:42:16.821707192Z" level=info msg="CreateContainer within sandbox \"d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 30 13:42:16.834393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777499849.mount: Deactivated successfully.
Jan 30 13:42:16.835950 containerd[1470]: time="2025-01-30T13:42:16.835903435Z" level=info msg="CreateContainer within sandbox \"d1f9cc173a2ea373f3679c8aff6a0c2706fa298739c01fa971a8c6170d020909\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"35d7f5112125bbd59780495c6d7f87c0e71103119c6f4b61264c75eb9eb61502\""
Jan 30 13:42:16.836271 containerd[1470]: time="2025-01-30T13:42:16.836250722Z" level=info msg="StartContainer for \"35d7f5112125bbd59780495c6d7f87c0e71103119c6f4b61264c75eb9eb61502\""
Jan 30 13:42:16.862544 systemd[1]: Started cri-containerd-35d7f5112125bbd59780495c6d7f87c0e71103119c6f4b61264c75eb9eb61502.scope - libcontainer container 35d7f5112125bbd59780495c6d7f87c0e71103119c6f4b61264c75eb9eb61502.
Jan 30 13:42:16.890232 containerd[1470]: time="2025-01-30T13:42:16.890180748Z" level=info msg="StartContainer for \"35d7f5112125bbd59780495c6d7f87c0e71103119c6f4b61264c75eb9eb61502\" returns successfully"
Jan 30 13:42:16.930196 kubelet[1771]: I0130 13:42:16.930127 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.529601725 podStartE2EDuration="15.930109319s" podCreationTimestamp="2025-01-30 13:42:01 +0000 UTC" firstStartedPulling="2025-01-30 13:42:16.420135188 +0000 UTC m=+50.652395556" lastFinishedPulling="2025-01-30 13:42:16.820642782 +0000 UTC m=+51.052903150" observedRunningTime="2025-01-30 13:42:16.930038742 +0000 UTC m=+51.162299110" watchObservedRunningTime="2025-01-30 13:42:16.930109319 +0000 UTC m=+51.162369687"
Jan 30 13:42:17.255269 kubelet[1771]: E0130 13:42:17.255129 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:18.039017 systemd-networkd[1406]: cali5ec59c6bf6e: Gained IPv6LL
Jan 30 13:42:18.255753 kubelet[1771]: E0130 13:42:18.255691 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:42:19.256883 kubelet[1771]: E0130 13:42:19.256814 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"