Jan 30 13:53:05.126057 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:53:05.126099 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:53:05.126115 kernel: BIOS-provided physical RAM map:
Jan 30 13:53:05.126127 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:53:05.126138 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:53:05.126150 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:53:05.126167 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 30 13:53:05.126180 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 30 13:53:05.126192 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 30 13:53:05.126204 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:53:05.126216 kernel: NX (Execute Disable) protection: active
Jan 30 13:53:05.126228 kernel: APIC: Static calls initialized
Jan 30 13:53:05.126241 kernel: SMBIOS 2.7 present.
Jan 30 13:53:05.126253 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 30 13:53:05.126272 kernel: Hypervisor detected: KVM
Jan 30 13:53:05.126286 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:53:05.126300 kernel: kvm-clock: using sched offset of 7065454821 cycles
Jan 30 13:53:05.126314 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:53:05.126382 kernel: tsc: Detected 2499.996 MHz processor
Jan 30 13:53:05.126398 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:53:05.126412 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:53:05.130040 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 30 13:53:05.130081 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:53:05.130096 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:53:05.130111 kernel: Using GB pages for direct mapping
Jan 30 13:53:05.130190 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:53:05.130205 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 30 13:53:05.130221 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 30 13:53:05.130535 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 13:53:05.130554 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 30 13:53:05.130576 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 30 13:53:05.130590 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:53:05.130604 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 13:53:05.130618 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 30 13:53:05.130631 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 13:53:05.130644 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 30 13:53:05.130701 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 30 13:53:05.130716 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:53:05.130731 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 30 13:53:05.130749 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 30 13:53:05.130769 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 30 13:53:05.130983 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 30 13:53:05.131002 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 30 13:53:05.131017 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 30 13:53:05.131036 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 30 13:53:05.131052 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 30 13:53:05.131066 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 30 13:53:05.131269 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 30 13:53:05.131288 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:53:05.131303 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:53:05.131318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 30 13:53:05.131333 kernel: NUMA: Initialized distance table, cnt=1
Jan 30 13:53:05.131347 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 30 13:53:05.131367 kernel: Zone ranges:
Jan 30 13:53:05.131381 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:53:05.131395 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 30 13:53:05.131411 kernel: Normal empty
Jan 30 13:53:05.131425 kernel: Movable zone start for each node
Jan 30 13:53:05.131439 kernel: Early memory node ranges
Jan 30 13:53:05.131455 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:53:05.131469 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 30 13:53:05.131549 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 30 13:53:05.131568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:53:05.131588 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:53:05.131602 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 30 13:53:05.131617 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 13:53:05.131629 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:53:05.131643 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 30 13:53:05.131656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:53:05.131678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:53:05.131691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:53:05.131703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:53:05.131721 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:53:05.131736 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:53:05.131748 kernel: TSC deadline timer available
Jan 30 13:53:05.131763 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:53:05.131776 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:53:05.131789 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 30 13:53:05.131802 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:53:05.131818 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:53:05.131834 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:53:05.142175 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:53:05.142197 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:53:05.142502 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:53:05.142519 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:53:05.142578 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:53:05.142599 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:53:05.142617 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:53:05.142632 kernel: random: crng init done
Jan 30 13:53:05.142943 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:53:05.143021 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:53:05.143038 kernel: Fallback order for Node 0: 0
Jan 30 13:53:05.143054 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 30 13:53:05.143070 kernel: Policy zone: DMA32
Jan 30 13:53:05.143086 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:53:05.143102 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 30 13:53:05.143117 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:53:05.143133 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:53:05.143154 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:53:05.143168 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:53:05.143184 kernel: Dynamic Preempt: voluntary
Jan 30 13:53:05.143200 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:53:05.143217 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:53:05.143233 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:53:05.143249 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:53:05.143264 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:53:05.143332 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:53:05.143352 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:53:05.143367 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:53:05.143383 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:53:05.143398 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:53:05.143414 kernel: Console: colour VGA+ 80x25
Jan 30 13:53:05.143429 kernel: printk: console [ttyS0] enabled
Jan 30 13:53:05.143445 kernel: ACPI: Core revision 20230628
Jan 30 13:53:05.143459 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 30 13:53:05.143474 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:53:05.143573 kernel: x2apic enabled
Jan 30 13:53:05.143592 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:53:05.143621 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 30 13:53:05.143641 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 30 13:53:05.143656 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:53:05.143672 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:53:05.143689 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:53:05.143705 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:53:05.143721 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:53:05.143737 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:53:05.143754 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 30 13:53:05.143771 kernel: RETBleed: Vulnerable
Jan 30 13:53:05.143790 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:53:05.143807 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:53:05.143823 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:53:05.148026 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 13:53:05.148187 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:53:05.148213 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:53:05.148231 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:53:05.148255 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 13:53:05.148309 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 13:53:05.148327 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 13:53:05.148391 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 13:53:05.148412 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 13:53:05.148429 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 30 13:53:05.148447 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:53:05.148464 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 13:53:05.148481 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 13:53:05.148497 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 30 13:53:05.148514 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 30 13:53:05.148535 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 30 13:53:05.148551 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 30 13:53:05.148568 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 30 13:53:05.148584 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:53:05.148601 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:53:05.148617 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:53:05.148723 kernel: landlock: Up and running.
Jan 30 13:53:05.148741 kernel: SELinux: Initializing.
Jan 30 13:53:05.148896 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:53:05.148980 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:53:05.148999 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 30 13:53:05.149021 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:53:05.149037 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:53:05.149055 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:53:05.149071 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 13:53:05.149088 kernel: signal: max sigframe size: 3632
Jan 30 13:53:05.149105 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:53:05.149123 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:53:05.149140 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:53:05.149440 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:53:05.149468 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:53:05.149486 kernel: .... node #0, CPUs: #1
Jan 30 13:53:05.149505 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 13:53:05.149522 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:53:05.149538 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:53:05.149554 kernel: smpboot: Max logical packages: 1
Jan 30 13:53:05.149631 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 30 13:53:05.149650 kernel: devtmpfs: initialized
Jan 30 13:53:05.149672 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:53:05.149688 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:53:05.149704 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:53:05.149720 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:53:05.149771 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:53:05.149793 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:53:05.149811 kernel: audit: type=2000 audit(1738245184.782:1): state=initialized audit_enabled=0 res=1
Jan 30 13:53:05.149827 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:53:05.149844 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:53:05.153350 kernel: cpuidle: using governor menu
Jan 30 13:53:05.153378 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:53:05.153394 kernel: dca service started, version 1.12.1
Jan 30 13:53:05.153411 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:53:05.153428 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:53:05.153445 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:53:05.153463 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:53:05.153479 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:53:05.153496 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:53:05.153519 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:53:05.153536 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:53:05.153552 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:53:05.153569 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:53:05.153585 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 13:53:05.153602 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:53:05.153619 kernel: ACPI: Interpreter enabled
Jan 30 13:53:05.153635 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:53:05.153651 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:53:05.153668 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:53:05.153688 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:53:05.153705 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 13:53:05.153721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:53:05.155899 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:53:05.156362 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:53:05.156530 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:53:05.156553 kernel: acpiphp: Slot [3] registered
Jan 30 13:53:05.156577 kernel: acpiphp: Slot [4] registered
Jan 30 13:53:05.156593 kernel: acpiphp: Slot [5] registered
Jan 30 13:53:05.156610 kernel: acpiphp: Slot [6] registered
Jan 30 13:53:05.156627 kernel: acpiphp: Slot [7] registered
Jan 30 13:53:05.156644 kernel: acpiphp: Slot [8] registered
Jan 30 13:53:05.156662 kernel: acpiphp: Slot [9] registered
Jan 30 13:53:05.156679 kernel: acpiphp: Slot [10] registered
Jan 30 13:53:05.156695 kernel: acpiphp: Slot [11] registered
Jan 30 13:53:05.156712 kernel: acpiphp: Slot [12] registered
Jan 30 13:53:05.156732 kernel: acpiphp: Slot [13] registered
Jan 30 13:53:05.156749 kernel: acpiphp: Slot [14] registered
Jan 30 13:53:05.156765 kernel: acpiphp: Slot [15] registered
Jan 30 13:53:05.156782 kernel: acpiphp: Slot [16] registered
Jan 30 13:53:05.156798 kernel: acpiphp: Slot [17] registered
Jan 30 13:53:05.161217 kernel: acpiphp: Slot [18] registered
Jan 30 13:53:05.162305 kernel: acpiphp: Slot [19] registered
Jan 30 13:53:05.162432 kernel: acpiphp: Slot [20] registered
Jan 30 13:53:05.162479 kernel: acpiphp: Slot [21] registered
Jan 30 13:53:05.162497 kernel: acpiphp: Slot [22] registered
Jan 30 13:53:05.162522 kernel: acpiphp: Slot [23] registered
Jan 30 13:53:05.162567 kernel: acpiphp: Slot [24] registered
Jan 30 13:53:05.162813 kernel: acpiphp: Slot [25] registered
Jan 30 13:53:05.162834 kernel: acpiphp: Slot [26] registered
Jan 30 13:53:05.162851 kernel: acpiphp: Slot [27] registered
Jan 30 13:53:05.165956 kernel: acpiphp: Slot [28] registered
Jan 30 13:53:05.166126 kernel: acpiphp: Slot [29] registered
Jan 30 13:53:05.166145 kernel: acpiphp: Slot [30] registered
Jan 30 13:53:05.166162 kernel: acpiphp: Slot [31] registered
Jan 30 13:53:05.166187 kernel: PCI host bridge to bus 0000:00
Jan 30 13:53:05.166537 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:53:05.166747 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:53:05.169215 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:53:05.169512 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 13:53:05.169649 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:53:05.176397 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:53:05.176655 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:53:05.176821 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 30 13:53:05.179283 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 13:53:05.179649 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 30 13:53:05.179803 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 30 13:53:05.182086 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 30 13:53:05.182245 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 30 13:53:05.182512 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 30 13:53:05.182650 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 30 13:53:05.182867 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 30 13:53:05.185388 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 30 13:53:05.185537 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 30 13:53:05.185663 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 13:53:05.185788 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:53:05.191182 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 13:53:05.191364 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 30 13:53:05.191508 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 13:53:05.191641 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 30 13:53:05.191662 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:53:05.191680 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:53:05.191704 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:53:05.191720 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:53:05.191737 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:53:05.191754 kernel: iommu: Default domain type: Translated
Jan 30 13:53:05.191771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:53:05.191787 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:53:05.191804 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:53:05.191821 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:53:05.191838 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 30 13:53:05.191987 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 30 13:53:05.192117 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 30 13:53:05.192259 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:53:05.192280 kernel: vgaarb: loaded
Jan 30 13:53:05.192297 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 30 13:53:05.192314 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 30 13:53:05.192330 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:53:05.192345 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:53:05.192362 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:53:05.192383 kernel: pnp: PnP ACPI init
Jan 30 13:53:05.192399 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 13:53:05.192416 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:53:05.192433 kernel: NET: Registered PF_INET protocol family
Jan 30 13:53:05.192449 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:53:05.192466 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:53:05.192483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:53:05.192499 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:53:05.192520 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:53:05.192536 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:53:05.192553 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:53:05.192569 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:53:05.192586 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:53:05.192602 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:53:05.192727 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:53:05.192844 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:53:05.199553 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:53:05.199806 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 13:53:05.200394 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:53:05.200423 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:53:05.200438 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:53:05.200454 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 30 13:53:05.200468 kernel: clocksource: Switched to clocksource tsc
Jan 30 13:53:05.200565 kernel: Initialise system trusted keyrings
Jan 30 13:53:05.200581 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:53:05.200604 kernel: Key type asymmetric registered
Jan 30 13:53:05.200618 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:53:05.200632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:53:05.200647 kernel: io scheduler mq-deadline registered
Jan 30 13:53:05.200662 kernel: io scheduler kyber registered
Jan 30 13:53:05.200676 kernel: io scheduler bfq registered
Jan 30 13:53:05.200690 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:53:05.200704 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:53:05.200719 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:53:05.200736 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:53:05.200750 kernel: i8042: Warning: Keylock active
Jan 30 13:53:05.200764 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:53:05.200779 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:53:05.200944 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 13:53:05.201257 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 13:53:05.201383 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:53:04 UTC (1738245184)
Jan 30 13:53:05.202620 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 13:53:05.202656 kernel: intel_pstate: CPU model not supported
Jan 30 13:53:05.202671 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:53:05.202685 kernel: Segment Routing with IPv6
Jan 30 13:53:05.202699 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:53:05.202712 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:53:05.202726 kernel: Key type dns_resolver registered
Jan 30 13:53:05.202740 kernel: IPI shorthand broadcast: enabled
Jan 30 13:53:05.202754 kernel: sched_clock: Marking stable (747003385, 286085363)->(1146097926, -113009178)
Jan 30 13:53:05.202842 kernel: registered taskstats version 1
Jan 30 13:53:05.202867 kernel: Loading compiled-in X.509 certificates
Jan 30 13:53:05.203883 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:53:05.203900 kernel: Key type .fscrypt registered
Jan 30 13:53:05.203914 kernel: Key type fscrypt-provisioning registered
Jan 30 13:53:05.203928 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:53:05.203942 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:53:05.204027 kernel: ima: No architecture policies found
Jan 30 13:53:05.204042 kernel: clk: Disabling unused clocks
Jan 30 13:53:05.204055 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:53:05.204075 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:53:05.204089 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:53:05.204103 kernel: Run /init as init process
Jan 30 13:53:05.204117 kernel: with arguments:
Jan 30 13:53:05.204190 kernel: /init
Jan 30 13:53:05.204212 kernel: with environment:
Jan 30 13:53:05.204226 kernel: HOME=/
Jan 30 13:53:05.204239 kernel: TERM=linux
Jan 30 13:53:05.204253 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:53:05.204277 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:53:05.204309 systemd[1]: Detected virtualization amazon.
Jan 30 13:53:05.204328 systemd[1]: Detected architecture x86-64.
Jan 30 13:53:05.204342 systemd[1]: Running in initrd.
Jan 30 13:53:05.204356 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:53:05.204373 systemd[1]: Hostname set to .
Jan 30 13:53:05.204389 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:53:05.204403 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:53:05.204418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:53:05.204472 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:53:05.204490 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:53:05.204505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:53:05.204554 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:53:05.211120 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:53:05.211144 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:53:05.211162 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:53:05.211181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:53:05.211197 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:53:05.211214 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:53:05.211240 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:53:05.211257 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:53:05.211277 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:53:05.211294 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:53:05.211309 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:53:05.211325 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:53:05.211340 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:53:05.211354 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:53:05.211371 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:53:05.211393 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:53:05.211411 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:53:05.211429 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:53:05.211453 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:53:05.211474 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:53:05.211492 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:53:05.211508 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:53:05.211527 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:53:05.211542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:53:05.211610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:05.211627 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:53:05.211643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:53:05.211658 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:53:05.211723 systemd-journald[178]: Collecting audit messages is disabled. Jan 30 13:53:05.211759 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:53:05.211775 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:53:05.211794 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:53:05.211810 kernel: Bridge firewalling registered Jan 30 13:53:05.211825 systemd-journald[178]: Journal started Jan 30 13:53:05.211856 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2829977015537995a7ad94d02e7bba) is 4.8M, max 38.6M, 33.7M free. 
Jan 30 13:53:05.126866 systemd-modules-load[179]: Inserted module 'overlay' Jan 30 13:53:05.329069 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:53:05.210805 systemd-modules-load[179]: Inserted module 'br_netfilter' Jan 30 13:53:05.331946 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:53:05.332026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:53:05.337466 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:05.346111 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:53:05.356115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:53:05.361262 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:53:05.364591 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:53:05.403326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:05.416641 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:53:05.425633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:53:05.428513 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:53:05.456344 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:53:05.491616 dracut-cmdline[214]: dracut-dracut-053 Jan 30 13:53:05.496668 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:53:05.510007 systemd-resolved[211]: Positive Trust Anchors: Jan 30 13:53:05.510026 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:53:05.510086 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:53:05.515478 systemd-resolved[211]: Defaulting to hostname 'linux'. Jan 30 13:53:05.517620 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:53:05.519440 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:53:05.623932 kernel: SCSI subsystem initialized Jan 30 13:53:05.636920 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 13:53:05.650903 kernel: iscsi: registered transport (tcp) Jan 30 13:53:05.686908 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:53:05.686996 kernel: QLogic iSCSI HBA Driver Jan 30 13:53:05.740944 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:53:05.746061 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:53:05.784335 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:53:05.784415 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:53:05.784436 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:53:05.843909 kernel: raid6: avx512x4 gen() 12750 MB/s Jan 30 13:53:05.861119 kernel: raid6: avx512x2 gen() 13867 MB/s Jan 30 13:53:05.877908 kernel: raid6: avx512x1 gen() 12982 MB/s Jan 30 13:53:05.894983 kernel: raid6: avx2x4 gen() 14283 MB/s Jan 30 13:53:05.911912 kernel: raid6: avx2x2 gen() 13400 MB/s Jan 30 13:53:05.929213 kernel: raid6: avx2x1 gen() 10489 MB/s Jan 30 13:53:05.929303 kernel: raid6: using algorithm avx2x4 gen() 14283 MB/s Jan 30 13:53:05.946970 kernel: raid6: .... xor() 3632 MB/s, rmw enabled Jan 30 13:53:05.947068 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:53:05.974952 kernel: xor: automatically using best checksumming function avx Jan 30 13:53:06.190952 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:53:06.208319 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:53:06.214747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:53:06.243752 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 30 13:53:06.251209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:53:06.268902 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 30 13:53:06.300607 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Jan 30 13:53:06.344461 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:53:06.354138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:53:06.447623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:53:06.460769 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:53:06.508848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:53:06.514268 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:53:06.519036 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:53:06.521141 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:53:06.537284 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:53:06.587707 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:53:06.599926 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 30 13:53:06.600310 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 13:53:06.620902 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 30 13:53:06.628412 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 30 13:53:06.658690 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 30 13:53:06.658900 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 30 13:53:06.659058 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:53:06.659077 kernel: GPT:9289727 != 16777215 Jan 30 13:53:06.659094 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:53:06.659112 kernel: GPT:9289727 != 16777215 Jan 30 13:53:06.659128 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 30 13:53:06.659144 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:06.659161 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:53:06.659182 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:8a:2d:2f:e7:af Jan 30 13:53:06.654487 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:53:06.654647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:53:06.657188 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:53:06.659944 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:53:06.665433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:06.669224 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:06.678688 (udev-worker)[453]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:06.717118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:06.731066 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:53:06.731184 kernel: AES CTR mode by8 optimization enabled Jan 30 13:53:06.883931 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (450) Jan 30 13:53:06.910920 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (453) Jan 30 13:53:06.937951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:06.948190 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:53:06.993482 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:53:07.039631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Jan 30 13:53:07.039805 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 30 13:53:07.057547 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 30 13:53:07.088204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 13:53:07.097624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 30 13:53:07.105526 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:53:07.118074 disk-uuid[630]: Primary Header is updated. Jan 30 13:53:07.118074 disk-uuid[630]: Secondary Entries is updated. Jan 30 13:53:07.118074 disk-uuid[630]: Secondary Header is updated. Jan 30 13:53:07.125895 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:07.137909 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:07.143898 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:08.147013 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:08.147993 disk-uuid[631]: The operation has completed successfully. Jan 30 13:53:08.317576 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:53:08.317698 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:53:08.344242 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:53:08.350095 sh[972]: Success Jan 30 13:53:08.365901 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:53:08.490705 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:53:08.500020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:53:08.504685 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:53:08.550210 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:53:08.550276 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:53:08.550295 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:53:08.551143 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:53:08.552278 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:53:08.667900 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:53:08.681048 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:53:08.684260 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:53:08.702273 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:53:08.717137 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:53:08.752034 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:08.752341 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:53:08.752362 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 30 13:53:08.760965 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 30 13:53:08.794903 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:08.795413 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:53:08.803238 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:53:08.815079 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:53:08.871662 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 30 13:53:08.878148 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:53:08.910905 systemd-networkd[1164]: lo: Link UP Jan 30 13:53:08.910933 systemd-networkd[1164]: lo: Gained carrier Jan 30 13:53:08.912829 systemd-networkd[1164]: Enumeration completed Jan 30 13:53:08.913252 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:53:08.913257 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:53:08.914539 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:53:08.916718 systemd[1]: Reached target network.target - Network. Jan 30 13:53:08.925332 systemd-networkd[1164]: eth0: Link UP Jan 30 13:53:08.925336 systemd-networkd[1164]: eth0: Gained carrier Jan 30 13:53:08.925350 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:53:08.957987 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.29.156/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 13:53:09.219171 ignition[1091]: Ignition 2.19.0 Jan 30 13:53:09.219241 ignition[1091]: Stage: fetch-offline Jan 30 13:53:09.219581 ignition[1091]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:09.219770 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:09.222758 ignition[1091]: Ignition finished successfully Jan 30 13:53:09.226364 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:53:09.231224 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 13:53:09.265901 ignition[1173]: Ignition 2.19.0 Jan 30 13:53:09.265928 ignition[1173]: Stage: fetch Jan 30 13:53:09.266858 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:09.269940 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:09.270101 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:09.322219 ignition[1173]: PUT result: OK Jan 30 13:53:09.338914 ignition[1173]: parsed url from cmdline: "" Jan 30 13:53:09.338966 ignition[1173]: no config URL provided Jan 30 13:53:09.338979 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:53:09.338997 ignition[1173]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:53:09.339026 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:09.345127 ignition[1173]: PUT result: OK Jan 30 13:53:09.345222 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 30 13:53:09.348159 ignition[1173]: GET result: OK Jan 30 13:53:09.348241 ignition[1173]: parsing config with SHA512: f7f18d0b9c093ef9d185ce867574662e32a1b4a80c716c181d59bcdc3e5af6c28ff68a338eb7076d1554330ca5b7387580c53101c78581d25522c0d82e41c89c Jan 30 13:53:09.352209 unknown[1173]: fetched base config from "system" Jan 30 13:53:09.352570 ignition[1173]: fetch: fetch complete Jan 30 13:53:09.352223 unknown[1173]: fetched base config from "system" Jan 30 13:53:09.352575 ignition[1173]: fetch: fetch passed Jan 30 13:53:09.352345 unknown[1173]: fetched user config from "aws" Jan 30 13:53:09.352622 ignition[1173]: Ignition finished successfully Jan 30 13:53:09.355535 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:53:09.365051 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 13:53:09.389591 ignition[1179]: Ignition 2.19.0 Jan 30 13:53:09.389605 ignition[1179]: Stage: kargs Jan 30 13:53:09.390198 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:09.390213 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:09.390329 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:09.391630 ignition[1179]: PUT result: OK Jan 30 13:53:09.402271 ignition[1179]: kargs: kargs passed Jan 30 13:53:09.402339 ignition[1179]: Ignition finished successfully Jan 30 13:53:09.405734 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:53:09.412229 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:53:09.432934 ignition[1186]: Ignition 2.19.0 Jan 30 13:53:09.432952 ignition[1186]: Stage: disks Jan 30 13:53:09.433405 ignition[1186]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:09.433415 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:09.433583 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:09.435869 ignition[1186]: PUT result: OK Jan 30 13:53:09.442185 ignition[1186]: disks: disks passed Jan 30 13:53:09.442247 ignition[1186]: Ignition finished successfully Jan 30 13:53:09.449571 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:53:09.450327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:53:09.454062 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:53:09.456269 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:53:09.457682 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:53:09.460626 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:53:09.471216 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 30 13:53:09.504953 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:53:09.508118 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:53:09.515013 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:53:09.748897 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:53:09.751730 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:53:09.753341 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:53:09.773188 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:53:09.790935 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:53:09.793916 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:53:09.793981 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:53:09.794015 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:53:09.815947 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:53:09.823906 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213) Jan 30 13:53:09.825950 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:09.826008 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:53:09.826028 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 30 13:53:09.826536 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:53:09.841987 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 30 13:53:09.843574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:53:10.253758 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:53:10.274160 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:53:10.284991 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:53:10.305419 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:53:10.581097 systemd-networkd[1164]: eth0: Gained IPv6LL Jan 30 13:53:10.792662 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:53:10.804052 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:53:10.807148 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:53:10.820616 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:53:10.830635 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:10.911578 ignition[1326]: INFO : Ignition 2.19.0 Jan 30 13:53:10.911578 ignition[1326]: INFO : Stage: mount Jan 30 13:53:10.914249 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:10.914249 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:10.914249 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:10.914249 ignition[1326]: INFO : PUT result: OK Jan 30 13:53:10.923496 ignition[1326]: INFO : mount: mount passed Jan 30 13:53:10.923496 ignition[1326]: INFO : Ignition finished successfully Jan 30 13:53:10.914504 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:53:10.925551 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:53:10.934181 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:53:10.965668 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 13:53:11.026242 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1338) Jan 30 13:53:11.036666 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:11.036812 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:53:11.036837 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 30 13:53:11.047900 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 30 13:53:11.052298 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:53:11.092156 ignition[1355]: INFO : Ignition 2.19.0 Jan 30 13:53:11.092156 ignition[1355]: INFO : Stage: files Jan 30 13:53:11.094705 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:11.094705 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:11.094705 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:11.098929 ignition[1355]: INFO : PUT result: OK Jan 30 13:53:11.102174 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:53:11.127089 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:53:11.127089 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:53:11.169956 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:53:11.171906 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:53:11.174174 unknown[1355]: wrote ssh authorized keys file for user: core Jan 30 13:53:11.177735 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:53:11.180514 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 30 
13:53:11.183744 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:53:11.183744 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:53:11.183744 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:53:11.183744 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:53:11.183744 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:53:11.183744 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:53:11.183744 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:53:11.659919 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 30 13:53:12.213729 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:53:12.217273 ignition[1355]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:53:12.217273 ignition[1355]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:53:12.217273 ignition[1355]: INFO : files: files passed Jan 30 13:53:12.217273 
ignition[1355]: INFO : Ignition finished successfully Jan 30 13:53:12.224315 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:53:12.236320 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:53:12.247544 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:53:12.253177 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:53:12.253453 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:53:12.265295 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:53:12.265295 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:53:12.273074 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:53:12.275222 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:53:12.276231 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:53:12.294491 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:53:12.347533 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:53:12.347738 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:53:12.350884 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:53:12.352454 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:53:12.354051 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:53:12.361098 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jan 30 13:53:12.375343 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:53:12.381069 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:53:12.395105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:53:12.397122 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:53:12.401117 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:53:12.402574 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:53:12.402743 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:53:12.409248 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:53:12.411788 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:53:12.412066 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:53:12.416557 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:53:12.416701 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:53:12.422806 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:53:12.428966 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:53:12.429270 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:53:12.429586 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:53:12.429752 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:53:12.429918 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:53:12.430164 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:53:12.430852 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:53:12.431262 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 30 13:53:12.431543 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:53:12.437016 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:53:12.439916 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:53:12.440099 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:53:12.442556 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:53:12.442733 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:53:12.445541 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:53:12.445702 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:53:12.453609 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:53:12.476199 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:53:12.477360 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:53:12.478532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:53:12.485652 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:53:12.486907 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:53:12.506301 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:53:12.508094 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:53:12.518499 ignition[1408]: INFO : Ignition 2.19.0
Jan 30 13:53:12.518499 ignition[1408]: INFO : Stage: umount
Jan 30 13:53:12.518499 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:53:12.518499 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:53:12.518499 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:53:12.544530 ignition[1408]: INFO : PUT result: OK
Jan 30 13:53:12.544530 ignition[1408]: INFO : umount: umount passed
Jan 30 13:53:12.544530 ignition[1408]: INFO : Ignition finished successfully
Jan 30 13:53:12.539847 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:53:12.540026 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:53:12.545371 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:53:12.547447 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:53:12.547564 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:53:12.550283 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:53:12.550345 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:53:12.552396 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:53:12.552461 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:53:12.554370 systemd[1]: Stopped target network.target - Network.
Jan 30 13:53:12.555471 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:53:12.555544 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:53:12.558044 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:53:12.559303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:53:12.561728 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:53:12.564494 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:53:12.567067 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:53:12.567317 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:53:12.567361 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:53:12.567529 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:53:12.567559 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:53:12.567773 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:53:12.567816 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:53:12.568234 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:53:12.568286 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:53:12.568678 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:53:12.574508 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:53:12.586262 systemd-networkd[1164]: eth0: DHCPv6 lease lost
Jan 30 13:53:12.594177 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:53:12.594315 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:53:12.603672 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:53:12.603758 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:53:12.625656 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:53:12.628403 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:53:12.630076 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:53:12.631657 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:53:12.634760 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:53:12.635273 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:53:12.654505 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:53:12.654583 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:53:12.659883 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:53:12.659963 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:53:12.664225 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:53:12.664304 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:53:12.666026 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:53:12.675060 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:53:12.677395 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:53:12.677521 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:53:12.681927 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:53:12.682005 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:53:12.683813 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:53:12.683861 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:53:12.689111 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:53:12.689177 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:53:12.691965 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:53:12.692019 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:53:12.694751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:53:12.694809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:53:12.696415 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:53:12.696463 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:53:12.710246 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:53:12.711378 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:53:12.711448 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:53:12.714233 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:53:12.714291 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:53:12.719675 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:53:12.719734 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:53:12.723529 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:53:12.723611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:53:12.725948 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:53:12.728451 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:53:12.736508 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:53:12.736639 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:53:12.739385 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:53:12.752315 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:53:12.764616 systemd[1]: Switching root.
Jan 30 13:53:12.811814 systemd-journald[178]: Journal stopped
Jan 30 13:53:15.337711 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:53:15.337809 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:53:15.337837 kernel: SELinux: policy capability open_perms=1
Jan 30 13:53:15.337856 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:53:15.360576 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:53:15.360620 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:53:15.360639 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:53:15.360665 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:53:15.360683 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:53:15.360699 kernel: audit: type=1403 audit(1738245193.290:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:53:15.364392 systemd[1]: Successfully loaded SELinux policy in 78.280ms.
Jan 30 13:53:15.364434 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 39.868ms.
Jan 30 13:53:15.364459 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:53:15.364568 systemd[1]: Detected virtualization amazon.
Jan 30 13:53:15.364602 systemd[1]: Detected architecture x86-64.
Jan 30 13:53:15.364623 systemd[1]: Detected first boot.
Jan 30 13:53:15.364646 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:53:15.364671 zram_generator::config[1450]: No configuration found.
Jan 30 13:53:15.364694 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:53:15.364716 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:53:15.364737 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:53:15.364761 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:53:15.364783 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:53:15.364809 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:53:15.364831 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:53:15.364853 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:53:15.368055 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:53:15.368110 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:53:15.368130 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:53:15.368158 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:53:15.368176 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:53:15.368195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:53:15.368517 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:53:15.368540 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:53:15.368560 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:53:15.368579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:53:15.368597 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:53:15.368615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:53:15.368633 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:53:15.368652 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:53:15.372501 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:53:15.372546 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:53:15.372569 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:53:15.372592 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:53:15.372614 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:53:15.372636 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:53:15.372658 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:53:15.372679 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:53:15.372704 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:53:15.372775 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:53:15.372800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:53:15.372821 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:53:15.372844 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:53:15.372865 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:53:15.372899 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:53:15.372922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:53:15.373506 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:53:15.373579 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:53:15.373604 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:53:15.373628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:53:15.373650 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:53:15.373672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:53:15.373693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:53:15.373716 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:53:15.373738 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:53:15.373761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:53:15.373788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:53:15.373809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:53:15.373830 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:53:15.373852 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:53:15.383044 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:53:15.383090 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:53:15.383110 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:53:15.383128 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:53:15.383304 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:53:15.383325 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:53:15.383344 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:53:15.383363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:53:15.383383 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:53:15.383412 kernel: fuse: init (API version 7.39)
Jan 30 13:53:15.383432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:53:15.383450 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:53:15.383468 systemd[1]: Stopped verity-setup.service.
Jan 30 13:53:15.383493 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:53:15.383511 kernel: loop: module loaded
Jan 30 13:53:15.383528 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:53:15.383546 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:53:15.383565 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:53:15.383594 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:53:15.383613 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:53:15.383632 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:53:15.383652 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:53:15.383671 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:53:15.383689 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:53:15.383707 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:53:15.383726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:53:15.383744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:53:15.383772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:53:15.383792 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:53:15.383811 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:53:15.383832 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:53:15.383851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:53:15.383889 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:53:15.383907 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:53:15.383926 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:53:15.383951 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:53:15.384007 systemd-journald[1532]: Collecting audit messages is disabled.
Jan 30 13:53:15.384042 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:53:15.384061 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:53:15.384086 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:53:15.384106 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:53:15.389401 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:53:15.389443 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:53:15.389464 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:53:15.389487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:53:15.389517 kernel: ACPI: bus type drm_connector registered
Jan 30 13:53:15.389543 systemd-journald[1532]: Journal started
Jan 30 13:53:15.389592 systemd-journald[1532]: Runtime Journal (/run/log/journal/ec2829977015537995a7ad94d02e7bba) is 4.8M, max 38.6M, 33.7M free.
Jan 30 13:53:14.718570 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:53:14.769935 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 30 13:53:14.770503 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:53:15.397999 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:53:15.400902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:53:15.403914 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:53:15.408657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:53:15.422986 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:53:15.448448 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:53:15.448592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:53:15.494722 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:53:15.455549 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:53:15.457694 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:53:15.458075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:53:15.461125 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:53:15.463388 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:53:15.466719 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:53:15.536900 kernel: loop0: detected capacity change from 0 to 140768
Jan 30 13:53:15.557248 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:53:15.559223 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:53:15.566623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:53:15.574960 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:53:15.590336 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:53:15.621220 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:53:15.663433 systemd-journald[1532]: Time spent on flushing to /var/log/journal/ec2829977015537995a7ad94d02e7bba is 98.730ms for 951 entries.
Jan 30 13:53:15.663433 systemd-journald[1532]: System Journal (/var/log/journal/ec2829977015537995a7ad94d02e7bba) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:53:15.781147 systemd-journald[1532]: Received client request to flush runtime journal.
Jan 30 13:53:15.781618 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:53:15.679629 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:53:15.686200 udevadm[1586]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:53:15.704353 systemd-tmpfiles[1559]: ACLs are not supported, ignoring.
Jan 30 13:53:15.704378 systemd-tmpfiles[1559]: ACLs are not supported, ignoring.
Jan 30 13:53:15.722831 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:53:15.738089 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:53:15.787577 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:53:15.808016 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:53:15.809189 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:53:15.828920 kernel: loop1: detected capacity change from 0 to 142488
Jan 30 13:53:15.854582 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:53:15.885242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:53:15.930952 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Jan 30 13:53:15.931421 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Jan 30 13:53:15.941518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:53:15.986170 kernel: loop2: detected capacity change from 0 to 61336
Jan 30 13:53:16.114360 kernel: loop3: detected capacity change from 0 to 210664
Jan 30 13:53:16.242896 kernel: loop4: detected capacity change from 0 to 140768
Jan 30 13:53:16.298937 kernel: loop5: detected capacity change from 0 to 142488
Jan 30 13:53:16.330912 kernel: loop6: detected capacity change from 0 to 61336
Jan 30 13:53:16.377632 kernel: loop7: detected capacity change from 0 to 210664
Jan 30 13:53:16.420625 (sd-merge)[1606]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 30 13:53:16.423468 (sd-merge)[1606]: Merged extensions into '/usr'.
Jan 30 13:53:16.439165 systemd[1]: Reloading requested from client PID 1558 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:53:16.439460 systemd[1]: Reloading...
Jan 30 13:53:16.625902 zram_generator::config[1638]: No configuration found.
Jan 30 13:53:16.928375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:53:17.081132 systemd[1]: Reloading finished in 638 ms.
Jan 30 13:53:17.143586 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:53:17.172254 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:53:17.192657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:53:17.212231 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:53:17.222294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:53:17.233907 systemd[1]: Reloading requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:53:17.233934 systemd[1]: Reloading...
Jan 30 13:53:17.242815 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:53:17.243605 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:53:17.251210 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:53:17.251692 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Jan 30 13:53:17.251793 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Jan 30 13:53:17.261745 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:53:17.261762 systemd-tmpfiles[1681]: Skipping /boot
Jan 30 13:53:17.282185 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:53:17.282200 systemd-tmpfiles[1681]: Skipping /boot
Jan 30 13:53:17.354456 systemd-udevd[1684]: Using default interface naming scheme 'v255'.
Jan 30 13:53:17.396367 zram_generator::config[1710]: No configuration found.
Jan 30 13:53:17.676633 (udev-worker)[1733]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:53:17.762044 ldconfig[1554]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:53:17.833902 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 13:53:17.841643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:53:17.858904 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 30 13:53:17.881936 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Jan 30 13:53:17.883903 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:53:17.885941 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jan 30 13:53:17.893788 kernel: ACPI: button: Sleep Button [SLPF]
Jan 30 13:53:17.962916 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1733)
Jan 30 13:53:17.965417 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:53:17.999615 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 13:53:18.000410 systemd[1]: Reloading finished in 765 ms.
Jan 30 13:53:18.022045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:53:18.024352 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:53:18.026151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:53:18.073352 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:53:18.078242 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:53:18.090275 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:53:18.095526 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:53:18.108990 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:53:18.124276 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:53:18.129230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:53:18.149833 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:53:18.160499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:53:18.160847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:53:18.168690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:53:18.180285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:53:18.192393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:53:18.193753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:53:18.194154 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:53:18.202114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:53:18.202412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:53:18.202663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:53:18.202812 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:53:18.211926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:53:18.212317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:53:18.251266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:53:18.253166 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:53:18.253511 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:53:18.256231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:53:18.269976 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:53:18.273510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:53:18.274430 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:53:18.278129 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:53:18.293108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:53:18.298816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:53:18.299739 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:53:18.301710 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:53:18.301975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:53:18.358343 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 13:53:18.365997 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:53:18.366210 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:53:18.402133 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:53:18.403749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:53:18.403846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:53:18.417251 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:53:18.418194 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:53:18.418753 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:53:18.447257 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:53:18.448909 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:53:18.455259 augenrules[1910]: No rules
Jan 30 13:53:18.458765 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:53:18.461845 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:53:18.481956 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:53:18.504763 lvm[1909]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:53:18.519984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:53:18.601089 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:53:18.605739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:53:18.618210 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:53:18.643022 systemd-networkd[1849]: lo: Link UP
Jan 30 13:53:18.643034 systemd-networkd[1849]: lo: Gained carrier
Jan 30 13:53:18.645350 systemd-networkd[1849]: Enumeration completed
Jan 30 13:53:18.645553 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:53:18.646105 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:53:18.647985 systemd-networkd[1849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:53:18.648183 lvm[1929]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:53:18.652469 systemd-networkd[1849]: eth0: Link UP
Jan 30 13:53:18.652818 systemd-networkd[1849]: eth0: Gained carrier
Jan 30 13:53:18.652940 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:53:18.674990 systemd-networkd[1849]: eth0: DHCPv4 address 172.31.29.156/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 13:53:18.707154 systemd-resolved[1851]: Positive Trust Anchors:
Jan 30 13:53:18.707589 systemd-resolved[1851]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:53:18.707698 systemd-resolved[1851]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:53:18.714736 systemd-resolved[1851]: Defaulting to hostname 'linux'.
Jan 30 13:53:18.805231 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:53:18.807310 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:53:18.809227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:53:18.812647 systemd[1]: Reached target network.target - Network.
Jan 30 13:53:18.813640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:53:18.815379 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:53:18.817129 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:53:18.818572 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:53:18.820149 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:53:18.822044 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:53:18.823395 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:53:18.824861 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:53:18.824921 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:53:18.825840 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:53:18.827570 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:53:18.830827 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:53:18.842240 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:53:18.845155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:53:18.848322 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:53:18.850446 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:53:18.851857 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:53:18.852009 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:53:18.852038 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:53:18.857802 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:53:18.864204 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 13:53:18.877472 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:53:18.884536 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:53:18.897615 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:53:18.899522 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:53:18.901339 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:53:18.961273 jq[1939]: false
Jan 30 13:53:18.921779 systemd[1]: Started ntpd.service - Network Time Service.
Jan 30 13:53:18.925300 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 30 13:53:18.932095 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:53:18.980165 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:53:18.987171 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:53:18.989325 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:53:18.991054 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:53:18.993812 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:53:19.000388 extend-filesystems[1940]: Found loop4
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found loop5
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found loop6
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found loop7
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found nvme0n1
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found nvme0n1p1
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found nvme0n1p2
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found nvme0n1p3
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found usr
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found nvme0n1p4
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found nvme0n1p6
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found nvme0n1p7
Jan 30 13:53:19.020056 extend-filesystems[1940]: Found nvme0n1p9
Jan 30 13:53:19.020056 extend-filesystems[1940]: Checking size of /dev/nvme0n1p9
Jan 30 13:53:19.001045 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:53:19.023723 dbus-daemon[1938]: [system] SELinux support is enabled
Jan 30 13:53:19.012524 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:53:19.012780 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:53:19.014937 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:53:19.016748 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:53:19.024273 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:53:19.056531 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:53:19.056589 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:53:19.068490 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:53:19.068521 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:53:19.099388 coreos-metadata[1937]: Jan 30 13:53:19.098 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 30 13:53:19.103985 dbus-daemon[1938]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1849 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 30 13:53:19.124962 coreos-metadata[1937]: Jan 30 13:53:19.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 30 13:53:19.121979 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 30 13:53:19.146103 extend-filesystems[1940]: Resized partition /dev/nvme0n1p9
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.136 INFO Fetch successful
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.136 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.139 INFO Fetch successful
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.139 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.141 INFO Fetch successful
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.143 INFO Fetch successful
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.144 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.145 INFO Fetch failed with 404: resource not found
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.145 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.146 INFO Fetch successful
Jan 30 13:53:19.151988 coreos-metadata[1937]: Jan 30 13:53:19.146 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 30 13:53:19.153455 extend-filesystems[1976]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:53:19.159157 coreos-metadata[1937]: Jan 30 13:53:19.152 INFO Fetch successful
Jan 30 13:53:19.159157 coreos-metadata[1937]: Jan 30 13:53:19.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 30 13:53:19.159157 coreos-metadata[1937]: Jan 30 13:53:19.155 INFO Fetch successful
Jan 30 13:53:19.159157 coreos-metadata[1937]: Jan 30 13:53:19.155 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 30 13:53:19.159157 coreos-metadata[1937]: Jan 30 13:53:19.156 INFO Fetch successful
Jan 30 13:53:19.159157 coreos-metadata[1937]: Jan 30 13:53:19.157 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 30 13:53:19.159157 coreos-metadata[1937]: Jan 30 13:53:19.157 INFO Fetch successful
Jan 30 13:53:19.177906 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 30 13:53:19.178057 jq[1952]: true
Jan 30 13:53:19.183488 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:53:19.185151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:53:19.246220 (ntainerd)[1972]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:53:19.253120 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting
Jan 30 13:53:19.253120 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 30 13:53:19.253120 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: ----------------------------------------------------
Jan 30 13:53:19.253120 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: ntp-4 is maintained by Network Time Foundation,
Jan 30 13:53:19.253120 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 30 13:53:19.253120 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: corporation. Support and training for ntp-4 are
Jan 30 13:53:19.253120 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: available at https://www.nwtime.org/support
Jan 30 13:53:19.253120 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: ----------------------------------------------------
Jan 30 13:53:19.246468 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting
Jan 30 13:53:19.253804 update_engine[1951]: I20250130 13:53:19.249909 1951 main.cc:92] Flatcar Update Engine starting
Jan 30 13:53:19.246494 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 30 13:53:19.246505 ntpd[1942]: ----------------------------------------------------
Jan 30 13:53:19.246515 ntpd[1942]: ntp-4 is maintained by Network Time Foundation,
Jan 30 13:53:19.246525 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 30 13:53:19.246535 ntpd[1942]: corporation. Support and training for ntp-4 are
Jan 30 13:53:19.246545 ntpd[1942]: available at https://www.nwtime.org/support
Jan 30 13:53:19.246555 ntpd[1942]: ----------------------------------------------------
Jan 30 13:53:19.261147 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: proto: precision = 0.075 usec (-24)
Jan 30 13:53:19.261147 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: basedate set to 2025-01-17
Jan 30 13:53:19.261147 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: gps base set to 2025-01-19 (week 2350)
Jan 30 13:53:19.258169 ntpd[1942]: proto: precision = 0.075 usec (-24)
Jan 30 13:53:19.258519 ntpd[1942]: basedate set to 2025-01-17
Jan 30 13:53:19.258535 ntpd[1942]: gps base set to 2025-01-19 (week 2350)
Jan 30 13:53:19.274769 update_engine[1951]: I20250130 13:53:19.273803 1951 update_check_scheduler.cc:74] Next update check in 8m37s
Jan 30 13:53:19.275123 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123
Jan 30 13:53:19.275123 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 30 13:53:19.272640 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123
Jan 30 13:53:19.271409 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 30 13:53:19.272856 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 30 13:53:19.273673 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:53:19.276756 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123
Jan 30 13:53:19.276954 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123
Jan 30 13:53:19.276954 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: Listen normally on 3 eth0 172.31.29.156:123
Jan 30 13:53:19.276954 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: Listen normally on 4 lo [::1]:123
Jan 30 13:53:19.276838 ntpd[1942]: Listen normally on 3 eth0 172.31.29.156:123
Jan 30 13:53:19.277131 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: bind(21) AF_INET6 fe80::48a:2dff:fe2f:e7af%2#123 flags 0x11 failed: Cannot assign requested address
Jan 30 13:53:19.276935 ntpd[1942]: Listen normally on 4 lo [::1]:123
Jan 30 13:53:19.276994 ntpd[1942]: bind(21) AF_INET6 fe80::48a:2dff:fe2f:e7af%2#123 flags 0x11 failed: Cannot assign requested address
Jan 30 13:53:19.277016 ntpd[1942]: unable to create socket on eth0 (5) for fe80::48a:2dff:fe2f:e7af%2#123
Jan 30 13:53:19.281984 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 30 13:53:19.277425 ntpd[1942]: failed to init interface for address fe80::48a:2dff:fe2f:e7af%2
Jan 30 13:53:19.329025 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: unable to create socket on eth0 (5) for fe80::48a:2dff:fe2f:e7af%2#123
Jan 30 13:53:19.329025 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: failed to init interface for address fe80::48a:2dff:fe2f:e7af%2
Jan 30 13:53:19.329025 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: Listening on routing socket on fd #21 for interface updates
Jan 30 13:53:19.329025 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:53:19.329025 ntpd[1942]: 30 Jan 13:53:19 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:53:19.329285 jq[1984]: true
Jan 30 13:53:19.284116 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:53:19.277475 ntpd[1942]: Listening on routing socket on fd #21 for interface updates
Jan 30 13:53:19.315599 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 13:53:19.287055 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:53:19.317261 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 13:53:19.287091 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:53:19.343112 extend-filesystems[1976]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 30 13:53:19.343112 extend-filesystems[1976]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 30 13:53:19.343112 extend-filesystems[1976]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 30 13:53:19.338497 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:53:19.351094 extend-filesystems[1940]: Resized filesystem in /dev/nvme0n1p9
Jan 30 13:53:19.338902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:53:19.376905 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1736)
Jan 30 13:53:19.451786 systemd-logind[1950]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 13:53:19.456064 systemd-logind[1950]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 30 13:53:19.457022 systemd-logind[1950]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 13:53:19.465579 systemd-logind[1950]: New seat seat0.
Jan 30 13:53:19.468864 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:53:19.511709 bash[2038]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:53:19.513494 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:53:19.530323 systemd[1]: Starting sshkeys.service...
Jan 30 13:53:19.588239 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 30 13:53:19.588723 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 30 13:53:19.605354 dbus-daemon[1938]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1970 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 30 13:53:19.606748 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 13:53:19.621011 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 13:53:19.637084 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 30 13:53:19.757907 polkitd[2093]: Started polkitd version 121
Jan 30 13:53:19.799768 polkitd[2093]: Loading rules from directory /etc/polkit-1/rules.d
Jan 30 13:53:19.808064 polkitd[2093]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 30 13:53:19.812382 polkitd[2093]: Finished loading, compiling and executing 2 rules
Jan 30 13:53:19.812686 sshd_keygen[1969]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:53:19.815716 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 30 13:53:19.815952 systemd[1]: Started polkit.service - Authorization Manager.
Jan 30 13:53:19.822434 polkitd[2093]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 30 13:53:19.854522 coreos-metadata[2085]: Jan 30 13:53:19.853 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 30 13:53:19.859768 coreos-metadata[2085]: Jan 30 13:53:19.859 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 30 13:53:19.863952 coreos-metadata[2085]: Jan 30 13:53:19.863 INFO Fetch successful
Jan 30 13:53:19.863952 coreos-metadata[2085]: Jan 30 13:53:19.863 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 30 13:53:19.866631 coreos-metadata[2085]: Jan 30 13:53:19.866 INFO Fetch successful
Jan 30 13:53:19.868146 unknown[2085]: wrote ssh authorized keys file for user: core
Jan 30 13:53:19.889995 locksmithd[1995]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:53:19.903616 systemd-hostnamed[1970]: Hostname set to (transient)
Jan 30 13:53:19.904706 systemd-resolved[1851]: System hostname changed to 'ip-172-31-29-156'.
Jan 30 13:53:19.922757 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:53:19.933399 update-ssh-keys[2133]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:53:19.934837 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:53:19.936519 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 13:53:19.941145 systemd[1]: Finished sshkeys.service.
Jan 30 13:53:19.951059 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:53:19.951314 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:53:19.962368 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:53:19.978385 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:53:19.987728 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:53:19.998337 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 13:53:19.999946 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:53:20.002442 containerd[1972]: time="2025-01-30T13:53:20.001985027Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 13:53:20.026732 containerd[1972]: time="2025-01-30T13:53:20.026485398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028217 containerd[1972]: time="2025-01-30T13:53:20.028171470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028217 containerd[1972]: time="2025-01-30T13:53:20.028211717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:53:20.028369 containerd[1972]: time="2025-01-30T13:53:20.028241447Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:53:20.028461 containerd[1972]: time="2025-01-30T13:53:20.028436760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:53:20.028504 containerd[1972]: time="2025-01-30T13:53:20.028463750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028566 containerd[1972]: time="2025-01-30T13:53:20.028541585Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028616 containerd[1972]: time="2025-01-30T13:53:20.028566289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028795 containerd[1972]: time="2025-01-30T13:53:20.028766777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028795 containerd[1972]: time="2025-01-30T13:53:20.028790126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028890 containerd[1972]: time="2025-01-30T13:53:20.028810239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028890 containerd[1972]: time="2025-01-30T13:53:20.028825111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:53:20.028972 containerd[1972]: time="2025-01-30T13:53:20.028948055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:53:20.029197 containerd[1972]: time="2025-01-30T13:53:20.029168885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:53:20.029338 containerd[1972]: time="2025-01-30T13:53:20.029311760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:53:20.029388 containerd[1972]: time="2025-01-30T13:53:20.029334452Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:53:20.029461 containerd[1972]: time="2025-01-30T13:53:20.029437719Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:53:20.029524 containerd[1972]: time="2025-01-30T13:53:20.029503253Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.034714517Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.034786277Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.034810749Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.034836162Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.034856854Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035040368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035332482Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035440897Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035457724Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035470312Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035487101Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035510025Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035522805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:53:20.036903 containerd[1972]: time="2025-01-30T13:53:20.035536232Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035551253Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035568633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035580677Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035592468Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035615582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035631069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035649469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035663166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035674805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035686890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035698273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035711846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035723598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:53:20.037567 containerd[1972]: time="2025-01-30T13:53:20.035741759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"...
type=io.containerd.grpc.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035754823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035765706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035778935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035793655Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035813935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035825065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035834763Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035902236Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035924997Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035936321Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035949277Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035960021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035978007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:53:20.038318 containerd[1972]: time="2025-01-30T13:53:20.035993983Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:53:20.038145 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:53:20.038832 containerd[1972]: time="2025-01-30T13:53:20.036021161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.036329246Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false 
PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.036383216Z" level=info msg="Connect containerd service" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.036418087Z" level=info msg="using legacy CRI server" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.036425196Z" level=info msg="using experimental 
NRI integration - disable nri plugin to prevent this" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.036535424Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037204922Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037669016Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037727009Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037771636Z" level=info msg="Start subscribing containerd event" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037825228Z" level=info msg="Start recovering state" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037917812Z" level=info msg="Start event monitor" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037948144Z" level=info msg="Start snapshots syncer" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037962318Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.037973219Z" level=info msg="Start streaming server" Jan 30 13:53:20.038893 containerd[1972]: time="2025-01-30T13:53:20.038041441Z" level=info msg="containerd successfully booted in 0.040529s" Jan 30 13:53:20.245113 systemd-networkd[1849]: eth0: Gained IPv6LL Jan 30 13:53:20.248536 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 30 13:53:20.258144 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 13:53:20.279515 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 30 13:53:20.285131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:53:20.288387 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 13:53:20.322292 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 13:53:20.375955 amazon-ssm-agent[2154]: Initializing new seelog logger
Jan 30 13:53:20.375955 amazon-ssm-agent[2154]: New Seelog Logger Creation Complete
Jan 30 13:53:20.375955 amazon-ssm-agent[2154]: 2025/01/30 13:53:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:53:20.375955 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:53:20.377300 amazon-ssm-agent[2154]: 2025/01/30 13:53:20 processing appconfig overrides
Jan 30 13:53:20.377565 amazon-ssm-agent[2154]: 2025/01/30 13:53:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:53:20.377565 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:53:20.377661 amazon-ssm-agent[2154]: 2025/01/30 13:53:20 processing appconfig overrides
Jan 30 13:53:20.378485 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO Proxy environment variables:
Jan 30 13:53:20.378653 amazon-ssm-agent[2154]: 2025/01/30 13:53:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:53:20.378653 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:53:20.378737 amazon-ssm-agent[2154]: 2025/01/30 13:53:20 processing appconfig overrides
Jan 30 13:53:20.381132 amazon-ssm-agent[2154]: 2025/01/30 13:53:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:53:20.381132 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:53:20.381325 amazon-ssm-agent[2154]: 2025/01/30 13:53:20 processing appconfig overrides
Jan 30 13:53:20.478996 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO https_proxy:
Jan 30 13:53:20.577368 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO http_proxy:
Jan 30 13:53:20.675405 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO no_proxy:
Jan 30 13:53:20.773947 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO Checking if agent identity type OnPrem can be assumed
Jan 30 13:53:20.872262 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO Checking if agent identity type EC2 can be assumed
Jan 30 13:53:20.976028 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO Agent will take identity from EC2
Jan 30 13:53:21.076324 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:53:21.081193 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:53:21.081193 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:53:21.081193 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 30 13:53:21.081193 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jan 30 13:53:21.081193 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [amazon-ssm-agent] Starting Core Agent
Jan 30 13:53:21.081193 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 30 13:53:21.081193 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [Registrar] Starting registrar module
Jan 30 13:53:21.081638 amazon-ssm-agent[2154]: 2025-01-30 13:53:20 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 30 13:53:21.081638 amazon-ssm-agent[2154]: 2025-01-30 13:53:21 INFO [EC2Identity] EC2 registration was successful.
Jan 30 13:53:21.081638 amazon-ssm-agent[2154]: 2025-01-30 13:53:21 INFO [CredentialRefresher] credentialRefresher has started
Jan 30 13:53:21.081638 amazon-ssm-agent[2154]: 2025-01-30 13:53:21 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 30 13:53:21.081638 amazon-ssm-agent[2154]: 2025-01-30 13:53:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 30 13:53:21.176031 amazon-ssm-agent[2154]: 2025-01-30 13:53:21 INFO [CredentialRefresher] Next credential rotation will be in 31.733327251116666 minutes
Jan 30 13:53:22.102854 amazon-ssm-agent[2154]: 2025-01-30 13:53:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 30 13:53:22.205327 amazon-ssm-agent[2154]: 2025-01-30 13:53:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2174) started
Jan 30 13:53:22.247079 ntpd[1942]: Listen normally on 6 eth0 [fe80::48a:2dff:fe2f:e7af%2]:123
Jan 30 13:53:22.248262 ntpd[1942]: 30 Jan 13:53:22 ntpd[1942]: Listen normally on 6 eth0 [fe80::48a:2dff:fe2f:e7af%2]:123
Jan 30 13:53:22.305670 amazon-ssm-agent[2154]: 2025-01-30 13:53:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 30 13:53:22.701774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:22.704087 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 13:53:22.706732 systemd[1]: Startup finished in 909ms (kernel) + 8.525s (initrd) + 9.490s (userspace) = 18.925s.
Jan 30 13:53:22.833375 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:53:24.454790 kubelet[2189]: E0130 13:53:24.454734 2189 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:53:24.457793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:53:24.458012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:53:24.458529 systemd[1]: kubelet.service: Consumed 1.054s CPU time.
Jan 30 13:53:27.576723 systemd-resolved[1851]: Clock change detected. Flushing caches.
Jan 30 13:53:30.170540 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 13:53:30.180827 systemd[1]: Started sshd@0-172.31.29.156:22-139.178.68.195:34122.service - OpenSSH per-connection server daemon (139.178.68.195:34122).
Jan 30 13:53:30.364723 sshd[2202]: Accepted publickey for core from 139.178.68.195 port 34122 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:53:30.368641 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:53:30.395650 systemd-logind[1950]: New session 1 of user core.
Jan 30 13:53:30.397422 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 13:53:30.406068 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 13:53:30.439265 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 13:53:30.450771 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 13:53:30.456763 (systemd)[2206]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 13:53:30.635328 systemd[2206]: Queued start job for default target default.target.
Jan 30 13:53:30.647350 systemd[2206]: Created slice app.slice - User Application Slice.
Jan 30 13:53:30.647776 systemd[2206]: Reached target paths.target - Paths.
Jan 30 13:53:30.647815 systemd[2206]: Reached target timers.target - Timers.
Jan 30 13:53:30.650284 systemd[2206]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 13:53:30.677239 systemd[2206]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 13:53:30.677618 systemd[2206]: Reached target sockets.target - Sockets.
Jan 30 13:53:30.677640 systemd[2206]: Reached target basic.target - Basic System.
Jan 30 13:53:30.677696 systemd[2206]: Reached target default.target - Main User Target.
Jan 30 13:53:30.677738 systemd[2206]: Startup finished in 211ms.
Jan 30 13:53:30.677933 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 13:53:30.685625 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 13:53:30.841586 systemd[1]: Started sshd@1-172.31.29.156:22-139.178.68.195:34130.service - OpenSSH per-connection server daemon (139.178.68.195:34130).
Jan 30 13:53:31.018575 sshd[2217]: Accepted publickey for core from 139.178.68.195 port 34130 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:53:31.020160 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:53:31.038213 systemd-logind[1950]: New session 2 of user core.
Jan 30 13:53:31.047639 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 13:53:31.191960 sshd[2217]: pam_unix(sshd:session): session closed for user core
Jan 30 13:53:31.197052 systemd[1]: sshd@1-172.31.29.156:22-139.178.68.195:34130.service: Deactivated successfully.
Jan 30 13:53:31.199781 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 13:53:31.204204 systemd-logind[1950]: Session 2 logged out. Waiting for processes to exit.
Jan 30 13:53:31.206419 systemd-logind[1950]: Removed session 2.
Jan 30 13:53:31.229458 systemd[1]: Started sshd@2-172.31.29.156:22-139.178.68.195:34140.service - OpenSSH per-connection server daemon (139.178.68.195:34140).
Jan 30 13:53:31.403315 sshd[2224]: Accepted publickey for core from 139.178.68.195 port 34140 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:53:31.405799 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:53:31.412222 systemd-logind[1950]: New session 3 of user core.
Jan 30 13:53:31.418638 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 13:53:31.538723 sshd[2224]: pam_unix(sshd:session): session closed for user core
Jan 30 13:53:31.544398 systemd[1]: sshd@2-172.31.29.156:22-139.178.68.195:34140.service: Deactivated successfully.
Jan 30 13:53:31.546617 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 13:53:31.548525 systemd-logind[1950]: Session 3 logged out. Waiting for processes to exit.
Jan 30 13:53:31.549925 systemd-logind[1950]: Removed session 3.
Jan 30 13:53:31.576855 systemd[1]: Started sshd@3-172.31.29.156:22-139.178.68.195:34156.service - OpenSSH per-connection server daemon (139.178.68.195:34156).
Jan 30 13:53:31.786178 sshd[2231]: Accepted publickey for core from 139.178.68.195 port 34156 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:53:31.789885 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:53:31.798582 systemd-logind[1950]: New session 4 of user core.
Jan 30 13:53:31.805225 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 13:53:31.932339 sshd[2231]: pam_unix(sshd:session): session closed for user core
Jan 30 13:53:31.937012 systemd[1]: sshd@3-172.31.29.156:22-139.178.68.195:34156.service: Deactivated successfully.
Jan 30 13:53:31.939503 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 13:53:31.940990 systemd-logind[1950]: Session 4 logged out. Waiting for processes to exit.
Jan 30 13:53:31.942764 systemd-logind[1950]: Removed session 4.
Jan 30 13:53:31.971334 systemd[1]: Started sshd@4-172.31.29.156:22-139.178.68.195:34170.service - OpenSSH per-connection server daemon (139.178.68.195:34170).
Jan 30 13:53:32.139007 sshd[2238]: Accepted publickey for core from 139.178.68.195 port 34170 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:53:32.140824 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:53:32.148347 systemd-logind[1950]: New session 5 of user core.
Jan 30 13:53:32.154602 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:53:32.274353 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 13:53:32.274936 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:53:32.291610 sudo[2241]: pam_unix(sudo:session): session closed for user root
Jan 30 13:53:32.316187 sshd[2238]: pam_unix(sshd:session): session closed for user core
Jan 30 13:53:32.326731 systemd[1]: sshd@4-172.31.29.156:22-139.178.68.195:34170.service: Deactivated successfully.
Jan 30 13:53:32.339204 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 13:53:32.350063 systemd-logind[1950]: Session 5 logged out. Waiting for processes to exit.
Jan 30 13:53:32.382157 systemd[1]: Started sshd@5-172.31.29.156:22-139.178.68.195:34174.service - OpenSSH per-connection server daemon (139.178.68.195:34174).
Jan 30 13:53:32.383880 systemd-logind[1950]: Removed session 5.
Jan 30 13:53:32.547997 sshd[2246]: Accepted publickey for core from 139.178.68.195 port 34174 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:53:32.551014 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:53:32.574236 systemd-logind[1950]: New session 6 of user core.
Jan 30 13:53:32.581721 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 13:53:32.686598 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 13:53:32.687063 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:53:32.693302 sudo[2250]: pam_unix(sudo:session): session closed for user root
Jan 30 13:53:32.700178 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 30 13:53:32.700805 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:53:32.716237 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 30 13:53:32.720458 auditctl[2253]: No rules
Jan 30 13:53:32.720903 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:53:32.721131 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 30 13:53:32.728017 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:53:32.765276 augenrules[2271]: No rules
Jan 30 13:53:32.766948 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:53:32.768621 sudo[2249]: pam_unix(sudo:session): session closed for user root
Jan 30 13:53:32.792454 sshd[2246]: pam_unix(sshd:session): session closed for user core
Jan 30 13:53:32.802428 systemd[1]: sshd@5-172.31.29.156:22-139.178.68.195:34174.service: Deactivated successfully.
Jan 30 13:53:32.805672 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 13:53:32.811242 systemd-logind[1950]: Session 6 logged out. Waiting for processes to exit.
Jan 30 13:53:32.835854 systemd-logind[1950]: Removed session 6.
Jan 30 13:53:32.842297 systemd[1]: Started sshd@6-172.31.29.156:22-139.178.68.195:34186.service - OpenSSH per-connection server daemon (139.178.68.195:34186).
Jan 30 13:53:33.040087 sshd[2279]: Accepted publickey for core from 139.178.68.195 port 34186 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:53:33.042236 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:53:33.050809 systemd-logind[1950]: New session 7 of user core.
Jan 30 13:53:33.061409 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 13:53:33.162223 sudo[2282]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 13:53:33.162712 sudo[2282]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:53:34.623080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:34.623586 systemd[1]: kubelet.service: Consumed 1.054s CPU time.
Jan 30 13:53:34.637310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:53:34.666196 systemd[1]: Reloading requested from client PID 2320 ('systemctl') (unit session-7.scope)...
Jan 30 13:53:34.666214 systemd[1]: Reloading...
Jan 30 13:53:34.911759 zram_generator::config[2363]: No configuration found.
Jan 30 13:53:35.087340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:53:35.307112 systemd[1]: Reloading finished in 640 ms.
Jan 30 13:53:35.372823 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 13:53:35.373073 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 13:53:35.373749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:35.382771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:53:35.608082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:35.624283 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:53:35.691841 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:53:35.692545 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:53:35.692545 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:53:35.695078 kubelet[2420]: I0130 13:53:35.694988 2420 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:53:36.280195 kubelet[2420]: I0130 13:53:36.279984 2420 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:53:36.280195 kubelet[2420]: I0130 13:53:36.280182 2420 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:53:36.280555 kubelet[2420]: I0130 13:53:36.280536 2420 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:53:36.306177 kubelet[2420]: I0130 13:53:36.305579 2420 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:53:36.324322 kubelet[2420]: I0130 13:53:36.324285 2420 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:53:36.324604 kubelet[2420]: I0130 13:53:36.324562 2420 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:53:36.324816 kubelet[2420]: I0130 13:53:36.324603 2420 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.29.156","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:53:36.324934 kubelet[2420]: I0130 13:53:36.324831 2420 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:53:36.324934 kubelet[2420]: I0130 13:53:36.324852 2420 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:53:36.325017 kubelet[2420]: I0130 13:53:36.325002 2420 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:53:36.326456 kubelet[2420]: I0130 13:53:36.326431 2420 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:53:36.326456 kubelet[2420]: I0130 13:53:36.326457 2420 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:53:36.326660 kubelet[2420]: I0130 13:53:36.326490 2420 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:53:36.326660 kubelet[2420]: I0130 13:53:36.326512 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:53:36.328963 kubelet[2420]: E0130 13:53:36.328256 2420 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:36.328963 kubelet[2420]: E0130 13:53:36.328650 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:36.333455 kubelet[2420]: I0130 13:53:36.333428 2420 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:53:36.335554 kubelet[2420]: I0130 13:53:36.335527 2420 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:53:36.335666 kubelet[2420]: W0130 13:53:36.335604 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:53:36.336493 kubelet[2420]: I0130 13:53:36.336400 2420 server.go:1264] "Started kubelet"
Jan 30 13:53:36.339939 kubelet[2420]: I0130 13:53:36.339092 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:53:36.344549 kubelet[2420]: I0130 13:53:36.344432 2420 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:53:36.346500 kubelet[2420]: I0130 13:53:36.346479 2420 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:53:36.348146 kubelet[2420]: I0130 13:53:36.348054 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:53:36.349392 kubelet[2420]: I0130 13:53:36.348663 2420 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:53:36.350565 kubelet[2420]: I0130 13:53:36.350545 2420 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:53:36.351541 kubelet[2420]: I0130 13:53:36.351525 2420 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:53:36.351723 kubelet[2420]: I0130 13:53:36.351712 2420 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:53:36.354688 kubelet[2420]: W0130 13:53:36.354659 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 30 13:53:36.354794 kubelet[2420]: E0130 13:53:36.354699 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 30 13:53:36.354845 kubelet[2420]: W0130 13:53:36.354796 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.29.156" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 30 13:53:36.354845 kubelet[2420]: E0130 13:53:36.354812 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.29.156" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 30 13:53:36.357102 kubelet[2420]: I0130 13:53:36.357043 2420 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:53:36.358419 kubelet[2420]: E0130 13:53:36.358397 2420 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:53:36.359491 kubelet[2420]: I0130 13:53:36.359459 2420 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:53:36.359491 kubelet[2420]: I0130 13:53:36.359477 2420 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:53:36.379646 kubelet[2420]: I0130 13:53:36.379607 2420 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:53:36.379930 kubelet[2420]: I0130 13:53:36.379815 2420 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:53:36.379930 kubelet[2420]: I0130 13:53:36.379839 2420 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:53:36.387071 kubelet[2420]: I0130 13:53:36.386906 2420 policy_none.go:49] "None policy: Start"
Jan 30 13:53:36.388220 kubelet[2420]: I0130 13:53:36.388081 2420 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:53:36.388220 kubelet[2420]: I0130 13:53:36.388112 2420 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:53:36.404673 kubelet[2420]: E0130 13:53:36.402970 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.29.156\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 30 13:53:36.404673 kubelet[2420]: W0130 13:53:36.403089 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 30 13:53:36.404673 kubelet[2420]: E0130 13:53:36.403116 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 30 13:53:36.404673 kubelet[2420]: E0130 13:53:36.403209 2420 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.156.181f7cd08c3e98c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.156,UID:172.31.29.156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.29.156,},FirstTimestamp:2025-01-30 13:53:36.336255168 +0000 UTC m=+0.707339877,LastTimestamp:2025-01-30 13:53:36.336255168 +0000 UTC m=+0.707339877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.156,}"
Jan 30 13:53:36.404437 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:53:36.426389 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:53:36.439743 kubelet[2420]: E0130 13:53:36.437301 2420 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.156.181f7cd08d8fc15a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.156,UID:172.31.29.156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.29.156,},FirstTimestamp:2025-01-30 13:53:36.358351194 +0000 UTC m=+0.729435905,LastTimestamp:2025-01-30 13:53:36.358351194 +0000 UTC m=+0.729435905,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.156,}"
Jan 30 13:53:36.449794 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:53:36.454093 kubelet[2420]: I0130 13:53:36.453634 2420 kubelet_node_status.go:73] "Attempting to register node" node="172.31.29.156"
Jan 30 13:53:36.455389 kubelet[2420]: I0130 13:53:36.454506 2420 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:53:36.455389 kubelet[2420]: I0130 13:53:36.454848 2420 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:53:36.455389 kubelet[2420]: I0130 13:53:36.454985 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:53:36.461869 kubelet[2420]: E0130 13:53:36.461099 2420 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.29.156"
Jan 30 13:53:36.468904 kubelet[2420]: E0130 13:53:36.468879 2420 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.29.156\" not found"
Jan 30 13:53:36.476163 kubelet[2420]: E0130 13:53:36.475969 2420 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.156.181f7cd08e9bfbc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.156,UID:172.31.29.156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.29.156 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.29.156,},FirstTimestamp:2025-01-30 13:53:36.375929798 +0000 UTC m=+0.747014502,LastTimestamp:2025-01-30 13:53:36.375929798 +0000 UTC m=+0.747014502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.156,}"
Jan 30 13:53:36.482391 kubelet[2420]: I0130 13:53:36.482193 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:53:36.485308 kubelet[2420]: I0130 13:53:36.484802 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:53:36.485308 kubelet[2420]: I0130 13:53:36.484840 2420 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:53:36.485308 kubelet[2420]: I0130 13:53:36.484862 2420 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:53:36.485308 kubelet[2420]: E0130 13:53:36.484918 2420 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 30 13:53:36.496118 kubelet[2420]: E0130 13:53:36.496020 2420 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.156.181f7cd08e9c3a3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.156,UID:172.31.29.156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.29.156 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.29.156,},FirstTimestamp:2025-01-30 13:53:36.375945791 +0000 UTC m=+0.747030485,LastTimestamp:2025-01-30 13:53:36.375945791 +0000 UTC m=+0.747030485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.156,}"
Jan 30 13:53:36.498380 kubelet[2420]: W0130 13:53:36.498302 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jan 30 13:53:36.498380 kubelet[2420]: E0130 13:53:36.498339 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jan 30 13:53:36.521703 kubelet[2420]: E0130 13:53:36.521580 2420 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.156.181f7cd08e9c49e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.156,UID:172.31.29.156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.29.156 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.29.156,},FirstTimestamp:2025-01-30 13:53:36.3759498 +0000 UTC m=+0.747034502,LastTimestamp:2025-01-30 13:53:36.3759498 +0000 UTC m=+0.747034502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.156,}"
Jan 30 13:53:36.549414 kubelet[2420]: E0130 13:53:36.547487 2420 event.go:359] "Server rejected event (will not retry!)" err="events \"172.31.29.156.181f7cd08e9bfbc6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.156.181f7cd08e9bfbc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.156,UID:172.31.29.156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.29.156 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.29.156,},FirstTimestamp:2025-01-30 13:53:36.375929798 +0000 UTC m=+0.747014502,LastTimestamp:2025-01-30 13:53:36.45358199 +0000 UTC m=+0.824666693,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.156,}"
Jan 30 13:53:36.617857 kubelet[2420]: E0130 13:53:36.617786 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.29.156\" not found" node="172.31.29.156"
Jan 30 13:53:36.665847 kubelet[2420]: I0130 13:53:36.665811 2420 kubelet_node_status.go:73] "Attempting to register node" node="172.31.29.156"
Jan 30 13:53:36.708607 kubelet[2420]: I0130 13:53:36.708570 2420 kubelet_node_status.go:76] "Successfully registered node" node="172.31.29.156"
Jan 30 13:53:36.784277 kubelet[2420]: E0130 13:53:36.783966 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.156\" not found"
Jan 30 13:53:36.884643 kubelet[2420]: E0130 13:53:36.884520 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.156\" not found"
Jan 30 13:53:36.985685 kubelet[2420]: E0130 13:53:36.985635 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.156\" not found"
Jan 30 13:53:37.086409 kubelet[2420]: E0130 13:53:37.086344 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.156\" not found"
Jan 30 13:53:37.153528 sudo[2282]: pam_unix(sudo:session): session closed for user root
Jan 30 13:53:37.177044 sshd[2279]: pam_unix(sshd:session): session closed for user core
Jan 30 13:53:37.188953 systemd[1]: sshd@6-172.31.29.156:22-139.178.68.195:34186.service: Deactivated successfully.
Jan 30 13:53:37.189376 kubelet[2420]: E0130 13:53:37.189232 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.156\" not found"
Jan 30 13:53:37.197628 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:53:37.202827 systemd-logind[1950]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:53:37.208379 systemd-logind[1950]: Removed session 7.
Jan 30 13:53:37.282983 kubelet[2420]: I0130 13:53:37.282940 2420 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 13:53:37.290255 kubelet[2420]: E0130 13:53:37.290098 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.156\" not found"
Jan 30 13:53:37.329487 kubelet[2420]: I0130 13:53:37.329425 2420 apiserver.go:52] "Watching apiserver"
Jan 30 13:53:37.329487 kubelet[2420]: E0130 13:53:37.329430 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:37.341196 kubelet[2420]: I0130 13:53:37.341143 2420 topology_manager.go:215] "Topology Admit Handler" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5" podNamespace="calico-system" podName="csi-node-driver-zjwh8"
Jan 30 13:53:37.341331 kubelet[2420]: I0130 13:53:37.341249 2420 topology_manager.go:215] "Topology Admit Handler" podUID="04af22e7-a5a4-4bdf-9077-8d88022ba084" podNamespace="kube-system" podName="kube-proxy-rg5hn"
Jan 30 13:53:37.341331 kubelet[2420]: I0130 13:53:37.341322 2420 topology_manager.go:215] "Topology Admit Handler" podUID="97b4f0a1-096d-4edf-bb5f-a2301c83e32c" podNamespace="calico-system" podName="calico-node-q2znc"
Jan 30 13:53:37.341619 kubelet[2420]: E0130 13:53:37.341587 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjwh8" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5"
Jan 30 13:53:37.354850 kubelet[2420]: I0130 13:53:37.354652 2420 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:53:37.357953 kubelet[2420]: I0130 13:53:37.357923 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a329e2c4-51a9-4843-a9e8-b48284b269b5-kubelet-dir\") pod \"csi-node-driver-zjwh8\" (UID: \"a329e2c4-51a9-4843-a9e8-b48284b269b5\") " pod="calico-system/csi-node-driver-zjwh8"
Jan 30 13:53:37.358806 systemd[1]: Created slice kubepods-besteffort-pod04af22e7_a5a4_4bdf_9077_8d88022ba084.slice - libcontainer container kubepods-besteffort-pod04af22e7_a5a4_4bdf_9077_8d88022ba084.slice.
Jan 30 13:53:37.360306 kubelet[2420]: I0130 13:53:37.359521 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kll6p\" (UniqueName: \"kubernetes.io/projected/a329e2c4-51a9-4843-a9e8-b48284b269b5-kube-api-access-kll6p\") pod \"csi-node-driver-zjwh8\" (UID: \"a329e2c4-51a9-4843-a9e8-b48284b269b5\") " pod="calico-system/csi-node-driver-zjwh8"
Jan 30 13:53:37.360306 kubelet[2420]: I0130 13:53:37.359671 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/04af22e7-a5a4-4bdf-9077-8d88022ba084-kube-proxy\") pod \"kube-proxy-rg5hn\" (UID: \"04af22e7-a5a4-4bdf-9077-8d88022ba084\") " pod="kube-system/kube-proxy-rg5hn"
Jan 30 13:53:37.360306 kubelet[2420]: I0130 13:53:37.359697 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04af22e7-a5a4-4bdf-9077-8d88022ba084-xtables-lock\") pod \"kube-proxy-rg5hn\" (UID: \"04af22e7-a5a4-4bdf-9077-8d88022ba084\") " pod="kube-system/kube-proxy-rg5hn"
Jan 30 13:53:37.360306 kubelet[2420]: I0130 13:53:37.359727 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjr2j\" (UniqueName: \"kubernetes.io/projected/04af22e7-a5a4-4bdf-9077-8d88022ba084-kube-api-access-cjr2j\") pod \"kube-proxy-rg5hn\" (UID: \"04af22e7-a5a4-4bdf-9077-8d88022ba084\") " pod="kube-system/kube-proxy-rg5hn"
Jan 30 13:53:37.360306 kubelet[2420]: I0130 13:53:37.359753 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-var-lib-calico\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.360599 kubelet[2420]: I0130 13:53:37.359779 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a329e2c4-51a9-4843-a9e8-b48284b269b5-socket-dir\") pod \"csi-node-driver-zjwh8\" (UID: \"a329e2c4-51a9-4843-a9e8-b48284b269b5\") " pod="calico-system/csi-node-driver-zjwh8"
Jan 30 13:53:37.360599 kubelet[2420]: I0130 13:53:37.359802 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04af22e7-a5a4-4bdf-9077-8d88022ba084-lib-modules\") pod \"kube-proxy-rg5hn\" (UID: \"04af22e7-a5a4-4bdf-9077-8d88022ba084\") " pod="kube-system/kube-proxy-rg5hn"
Jan 30 13:53:37.360599 kubelet[2420]: I0130 13:53:37.359917 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-cni-net-dir\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.360599 kubelet[2420]: I0130 13:53:37.359943 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-flexvol-driver-host\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.360599 kubelet[2420]: I0130 13:53:37.359968 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a329e2c4-51a9-4843-a9e8-b48284b269b5-varrun\") pod \"csi-node-driver-zjwh8\" (UID: \"a329e2c4-51a9-4843-a9e8-b48284b269b5\") " pod="calico-system/csi-node-driver-zjwh8"
Jan 30 13:53:37.360793 kubelet[2420]: I0130 13:53:37.359991 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-xtables-lock\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.360793 kubelet[2420]: I0130 13:53:37.360016 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-node-certs\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.360793 kubelet[2420]: I0130 13:53:37.360039 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-cni-log-dir\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.360793 kubelet[2420]: I0130 13:53:37.360065 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs22p\" (UniqueName: \"kubernetes.io/projected/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-kube-api-access-vs22p\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.360793 kubelet[2420]: I0130 13:53:37.360110 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a329e2c4-51a9-4843-a9e8-b48284b269b5-registration-dir\") pod \"csi-node-driver-zjwh8\" (UID: \"a329e2c4-51a9-4843-a9e8-b48284b269b5\") " pod="calico-system/csi-node-driver-zjwh8"
Jan 30 13:53:37.361101 kubelet[2420]: I0130 13:53:37.360132 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-lib-modules\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.361101 kubelet[2420]: I0130 13:53:37.360157 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-policysync\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.361101 kubelet[2420]: I0130 13:53:37.360180 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-tigera-ca-bundle\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.361101 kubelet[2420]: I0130 13:53:37.360202 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-var-run-calico\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.361101 kubelet[2420]: I0130 13:53:37.360231 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/97b4f0a1-096d-4edf-bb5f-a2301c83e32c-cni-bin-dir\") pod \"calico-node-q2znc\" (UID: \"97b4f0a1-096d-4edf-bb5f-a2301c83e32c\") " pod="calico-system/calico-node-q2znc"
Jan 30 13:53:37.374458 systemd[1]: Created slice kubepods-besteffort-pod97b4f0a1_096d_4edf_bb5f_a2301c83e32c.slice - libcontainer container kubepods-besteffort-pod97b4f0a1_096d_4edf_bb5f_a2301c83e32c.slice.
Jan 30 13:53:37.391871 kubelet[2420]: I0130 13:53:37.391507 2420 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 30 13:53:37.392511 containerd[1972]: time="2025-01-30T13:53:37.392462437Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:53:37.392932 kubelet[2420]: I0130 13:53:37.392701 2420 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 30 13:53:37.465403 kubelet[2420]: E0130 13:53:37.464169 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:37.465403 kubelet[2420]: W0130 13:53:37.464193 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:37.465403 kubelet[2420]: E0130 13:53:37.464213 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:37.465403 kubelet[2420]: E0130 13:53:37.465194 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:37.465403 kubelet[2420]: W0130 13:53:37.465211 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:37.465403 kubelet[2420]: E0130 13:53:37.465231 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:37.469119 kubelet[2420]: E0130 13:53:37.468814 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:37.469119 kubelet[2420]: W0130 13:53:37.468836 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:37.469119 kubelet[2420]: E0130 13:53:37.468858 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:37.469521 kubelet[2420]: E0130 13:53:37.469440 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:37.469521 kubelet[2420]: W0130 13:53:37.469455 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:37.469521 kubelet[2420]: E0130 13:53:37.469471 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:37.482950 kubelet[2420]: E0130 13:53:37.482828 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:37.482950 kubelet[2420]: W0130 13:53:37.482855 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:37.482950 kubelet[2420]: E0130 13:53:37.482881 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:37.518393 kubelet[2420]: E0130 13:53:37.514507 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:37.518393 kubelet[2420]: W0130 13:53:37.514536 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:37.518393 kubelet[2420]: E0130 13:53:37.514563 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:37.526583 kubelet[2420]: E0130 13:53:37.525661 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:37.526583 kubelet[2420]: W0130 13:53:37.525686 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:37.526583 kubelet[2420]: E0130 13:53:37.525714 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:37.532845 kubelet[2420]: E0130 13:53:37.532815 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:37.533065 kubelet[2420]: W0130 13:53:37.532840 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:37.533065 kubelet[2420]: E0130 13:53:37.533023 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:37.675110 containerd[1972]: time="2025-01-30T13:53:37.675065265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rg5hn,Uid:04af22e7-a5a4-4bdf-9077-8d88022ba084,Namespace:kube-system,Attempt:0,}"
Jan 30 13:53:37.679013 containerd[1972]: time="2025-01-30T13:53:37.678836065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2znc,Uid:97b4f0a1-096d-4edf-bb5f-a2301c83e32c,Namespace:calico-system,Attempt:0,}"
Jan 30 13:53:38.283382 containerd[1972]: time="2025-01-30T13:53:38.281877620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:38.290847 containerd[1972]: time="2025-01-30T13:53:38.290796695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:38.293581 containerd[1972]: time="2025-01-30T13:53:38.293185849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 13:53:38.294373 containerd[1972]: time="2025-01-30T13:53:38.294327331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:53:38.295694 containerd[1972]: time="2025-01-30T13:53:38.295660638Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:38.299970 containerd[1972]: time="2025-01-30T13:53:38.299924624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:38.301765 containerd[1972]: time="2025-01-30T13:53:38.301695963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.532113ms"
Jan 30 13:53:38.303096 containerd[1972]: time="2025-01-30T13:53:38.303062035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 624.13535ms"
Jan 30 13:53:38.330129 kubelet[2420]: E0130 13:53:38.330086 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:38.485354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338736549.mount: Deactivated successfully.
Jan 30 13:53:38.627608 containerd[1972]: time="2025-01-30T13:53:38.627351467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:53:38.632405 containerd[1972]: time="2025-01-30T13:53:38.631803765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:53:38.632554 containerd[1972]: time="2025-01-30T13:53:38.632375409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:38.636041 containerd[1972]: time="2025-01-30T13:53:38.635797095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:38.638175 containerd[1972]: time="2025-01-30T13:53:38.637609418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:38.638175 containerd[1972]: time="2025-01-30T13:53:38.637693359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:38.638175 containerd[1972]: time="2025-01-30T13:53:38.637726597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:38.638175 containerd[1972]: time="2025-01-30T13:53:38.637864288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:38.760190 systemd[1]: run-containerd-runc-k8s.io-d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1-runc.IKJYoZ.mount: Deactivated successfully. Jan 30 13:53:38.772612 systemd[1]: Started cri-containerd-9ea11dd27f019ebf41b9acbd72e23afcd5c58d55b87c5b306df887c48cb6c039.scope - libcontainer container 9ea11dd27f019ebf41b9acbd72e23afcd5c58d55b87c5b306df887c48cb6c039. Jan 30 13:53:38.775763 systemd[1]: Started cri-containerd-d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1.scope - libcontainer container d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1. 
Jan 30 13:53:38.818277 containerd[1972]: time="2025-01-30T13:53:38.818096834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rg5hn,Uid:04af22e7-a5a4-4bdf-9077-8d88022ba084,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ea11dd27f019ebf41b9acbd72e23afcd5c58d55b87c5b306df887c48cb6c039\"" Jan 30 13:53:38.823113 containerd[1972]: time="2025-01-30T13:53:38.823069718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:53:38.823763 containerd[1972]: time="2025-01-30T13:53:38.823712251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2znc,Uid:97b4f0a1-096d-4edf-bb5f-a2301c83e32c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1\"" Jan 30 13:53:39.331041 kubelet[2420]: E0130 13:53:39.330997 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:39.487810 kubelet[2420]: E0130 13:53:39.487756 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjwh8" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5" Jan 30 13:53:40.276212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877987782.mount: Deactivated successfully. 
Jan 30 13:53:40.331521 kubelet[2420]: E0130 13:53:40.331478 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:40.909704 containerd[1972]: time="2025-01-30T13:53:40.909609888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:40.910931 containerd[1972]: time="2025-01-30T13:53:40.910791686Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:53:40.912631 containerd[1972]: time="2025-01-30T13:53:40.911811337Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:40.914299 containerd[1972]: time="2025-01-30T13:53:40.914257259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:40.914911 containerd[1972]: time="2025-01-30T13:53:40.914867408Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.0917511s" Jan 30 13:53:40.915006 containerd[1972]: time="2025-01-30T13:53:40.914919250Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:53:40.916736 containerd[1972]: time="2025-01-30T13:53:40.916703958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:53:40.918419 
containerd[1972]: time="2025-01-30T13:53:40.918383427Z" level=info msg="CreateContainer within sandbox \"9ea11dd27f019ebf41b9acbd72e23afcd5c58d55b87c5b306df887c48cb6c039\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:53:40.934208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983515406.mount: Deactivated successfully. Jan 30 13:53:40.938941 containerd[1972]: time="2025-01-30T13:53:40.938891782Z" level=info msg="CreateContainer within sandbox \"9ea11dd27f019ebf41b9acbd72e23afcd5c58d55b87c5b306df887c48cb6c039\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f461ce1e6fb92fa70142d9cbe275e18887e44d52619e3299cf5fb52c076cd389\"" Jan 30 13:53:40.939886 containerd[1972]: time="2025-01-30T13:53:40.939841844Z" level=info msg="StartContainer for \"f461ce1e6fb92fa70142d9cbe275e18887e44d52619e3299cf5fb52c076cd389\"" Jan 30 13:53:40.991597 systemd[1]: Started cri-containerd-f461ce1e6fb92fa70142d9cbe275e18887e44d52619e3299cf5fb52c076cd389.scope - libcontainer container f461ce1e6fb92fa70142d9cbe275e18887e44d52619e3299cf5fb52c076cd389. 
Jan 30 13:53:41.032219 containerd[1972]: time="2025-01-30T13:53:41.032004656Z" level=info msg="StartContainer for \"f461ce1e6fb92fa70142d9cbe275e18887e44d52619e3299cf5fb52c076cd389\" returns successfully" Jan 30 13:53:41.331835 kubelet[2420]: E0130 13:53:41.331795 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:41.485772 kubelet[2420]: E0130 13:53:41.485729 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjwh8" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5" Jan 30 13:53:41.601205 kubelet[2420]: E0130 13:53:41.600553 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.601205 kubelet[2420]: W0130 13:53:41.600579 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.601205 kubelet[2420]: E0130 13:53:41.600605 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.601205 kubelet[2420]: E0130 13:53:41.600997 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.601205 kubelet[2420]: W0130 13:53:41.601012 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.601205 kubelet[2420]: E0130 13:53:41.601029 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.602105 kubelet[2420]: E0130 13:53:41.601237 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.602105 kubelet[2420]: W0130 13:53:41.601247 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.602105 kubelet[2420]: E0130 13:53:41.601258 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.602105 kubelet[2420]: E0130 13:53:41.601618 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.602105 kubelet[2420]: W0130 13:53:41.601631 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.602105 kubelet[2420]: E0130 13:53:41.601644 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.602105 kubelet[2420]: E0130 13:53:41.601985 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.602105 kubelet[2420]: W0130 13:53:41.601998 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.602105 kubelet[2420]: E0130 13:53:41.602011 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.604928 kubelet[2420]: E0130 13:53:41.602204 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.604928 kubelet[2420]: W0130 13:53:41.602213 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.604928 kubelet[2420]: E0130 13:53:41.602224 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.604928 kubelet[2420]: E0130 13:53:41.602427 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.604928 kubelet[2420]: W0130 13:53:41.602499 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.604928 kubelet[2420]: E0130 13:53:41.603796 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.604928 kubelet[2420]: E0130 13:53:41.604450 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.604928 kubelet[2420]: W0130 13:53:41.604463 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.604928 kubelet[2420]: E0130 13:53:41.604477 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.604928 kubelet[2420]: E0130 13:53:41.604842 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.606592 kubelet[2420]: W0130 13:53:41.604886 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.606592 kubelet[2420]: E0130 13:53:41.604900 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.606592 kubelet[2420]: E0130 13:53:41.605311 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.606592 kubelet[2420]: W0130 13:53:41.605325 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.606592 kubelet[2420]: E0130 13:53:41.605523 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.606592 kubelet[2420]: E0130 13:53:41.605826 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.606592 kubelet[2420]: W0130 13:53:41.605867 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.606592 kubelet[2420]: E0130 13:53:41.605882 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.606592 kubelet[2420]: E0130 13:53:41.606188 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.606592 kubelet[2420]: W0130 13:53:41.606198 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.607819 kubelet[2420]: E0130 13:53:41.606240 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.607819 kubelet[2420]: E0130 13:53:41.606643 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.607819 kubelet[2420]: W0130 13:53:41.606682 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.607819 kubelet[2420]: E0130 13:53:41.606696 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.607819 kubelet[2420]: E0130 13:53:41.606991 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.607819 kubelet[2420]: W0130 13:53:41.607002 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.607819 kubelet[2420]: E0130 13:53:41.607044 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.607819 kubelet[2420]: E0130 13:53:41.607483 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.607819 kubelet[2420]: W0130 13:53:41.607495 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.607819 kubelet[2420]: E0130 13:53:41.607539 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.609941 kubelet[2420]: E0130 13:53:41.607843 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.609941 kubelet[2420]: W0130 13:53:41.607853 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.609941 kubelet[2420]: E0130 13:53:41.607897 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.609941 kubelet[2420]: E0130 13:53:41.608281 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.609941 kubelet[2420]: W0130 13:53:41.608291 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.609941 kubelet[2420]: E0130 13:53:41.608304 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.609941 kubelet[2420]: E0130 13:53:41.609217 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.609941 kubelet[2420]: W0130 13:53:41.609229 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.609941 kubelet[2420]: E0130 13:53:41.609346 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.610977 kubelet[2420]: E0130 13:53:41.610128 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.610977 kubelet[2420]: W0130 13:53:41.610175 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.610977 kubelet[2420]: E0130 13:53:41.610192 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.610977 kubelet[2420]: E0130 13:53:41.610896 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.610977 kubelet[2420]: W0130 13:53:41.610907 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.610977 kubelet[2420]: E0130 13:53:41.610920 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.699596 kubelet[2420]: E0130 13:53:41.699561 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.699596 kubelet[2420]: W0130 13:53:41.699586 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.699895 kubelet[2420]: E0130 13:53:41.699609 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.700118 kubelet[2420]: E0130 13:53:41.700095 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.700118 kubelet[2420]: W0130 13:53:41.700111 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.700231 kubelet[2420]: E0130 13:53:41.700147 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.700734 kubelet[2420]: E0130 13:53:41.700713 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.701227 kubelet[2420]: W0130 13:53:41.700729 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.701227 kubelet[2420]: E0130 13:53:41.701055 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.701589 kubelet[2420]: E0130 13:53:41.701571 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.701589 kubelet[2420]: W0130 13:53:41.701586 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.701709 kubelet[2420]: E0130 13:53:41.701606 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.701871 kubelet[2420]: E0130 13:53:41.701853 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.701871 kubelet[2420]: W0130 13:53:41.701867 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.701983 kubelet[2420]: E0130 13:53:41.701953 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.702198 kubelet[2420]: E0130 13:53:41.702180 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.702198 kubelet[2420]: W0130 13:53:41.702195 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.702306 kubelet[2420]: E0130 13:53:41.702214 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.702572 kubelet[2420]: E0130 13:53:41.702546 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.702572 kubelet[2420]: W0130 13:53:41.702565 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.702702 kubelet[2420]: E0130 13:53:41.702587 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:41.702834 kubelet[2420]: E0130 13:53:41.702816 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.702834 kubelet[2420]: W0130 13:53:41.702831 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.702935 kubelet[2420]: E0130 13:53:41.702848 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:41.704316 kubelet[2420]: E0130 13:53:41.704222 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:41.704316 kubelet[2420]: W0130 13:53:41.704310 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:41.704465 kubelet[2420]: E0130 13:53:41.704397 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 30 13:53:41.705951 kubelet[2420]: E0130 13:53:41.705929 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:41.705951 kubelet[2420]: W0130 13:53:41.705946 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:41.706070 kubelet[2420]: E0130 13:53:41.705968 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:41.706809 kubelet[2420]: E0130 13:53:41.706789 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:41.706809 kubelet[2420]: W0130 13:53:41.706805 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:41.707010 kubelet[2420]: E0130 13:53:41.706964 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:41.707076 kubelet[2420]: E0130 13:53:41.707032 2420 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:53:41.707076 kubelet[2420]: W0130 13:53:41.707041 2420 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:53:41.707076 kubelet[2420]: E0130 13:53:41.707053 2420 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:42.184280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97690652.mount: Deactivated successfully.
Jan 30 13:53:42.339969 kubelet[2420]: E0130 13:53:42.339888 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:42.348820 containerd[1972]: time="2025-01-30T13:53:42.348756139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:42.355658 containerd[1972]: time="2025-01-30T13:53:42.353899524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 30 13:53:42.360633 containerd[1972]: time="2025-01-30T13:53:42.359906809Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:42.375598 containerd[1972]: time="2025-01-30T13:53:42.375546792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:42.376728 containerd[1972]: time="2025-01-30T13:53:42.376679118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.459931176s"
Jan 30 13:53:42.377685 containerd[1972]: time="2025-01-30T13:53:42.376733023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 30 13:53:42.382641 containerd[1972]: time="2025-01-30T13:53:42.382595213Z" level=info msg="CreateContainer within sandbox \"d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 30 13:53:42.407221 containerd[1972]: time="2025-01-30T13:53:42.407178486Z" level=info msg="CreateContainer within sandbox \"d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794\""
Jan 30 13:53:42.409901 containerd[1972]: time="2025-01-30T13:53:42.408301968Z" level=info msg="StartContainer for \"62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794\""
Jan 30 13:53:42.449615 systemd[1]: Started cri-containerd-62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794.scope - libcontainer container 62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794.
Jan 30 13:53:42.482602 containerd[1972]: time="2025-01-30T13:53:42.482555683Z" level=info msg="StartContainer for \"62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794\" returns successfully"
Jan 30 13:53:42.500518 systemd[1]: cri-containerd-62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794.scope: Deactivated successfully.
Jan 30 13:53:42.546711 kubelet[2420]: I0130 13:53:42.546539 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rg5hn" podStartSLOduration=4.452127192 podStartE2EDuration="6.54651885s" podCreationTimestamp="2025-01-30 13:53:36 +0000 UTC" firstStartedPulling="2025-01-30 13:53:38.821846194 +0000 UTC m=+3.192930897" lastFinishedPulling="2025-01-30 13:53:40.916237853 +0000 UTC m=+5.287322555" observedRunningTime="2025-01-30 13:53:41.600018849 +0000 UTC m=+5.971103559" watchObservedRunningTime="2025-01-30 13:53:42.54651885 +0000 UTC m=+6.917603559"
Jan 30 13:53:42.700273 containerd[1972]: time="2025-01-30T13:53:42.700107623Z" level=info msg="shim disconnected" id=62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794 namespace=k8s.io
Jan 30 13:53:42.700273 containerd[1972]: time="2025-01-30T13:53:42.700174934Z" level=warning msg="cleaning up after shim disconnected" id=62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794 namespace=k8s.io
Jan 30 13:53:42.700273 containerd[1972]: time="2025-01-30T13:53:42.700187786Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:53:43.102226 systemd[1]: run-containerd-runc-k8s.io-62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794-runc.0L1eME.mount: Deactivated successfully.
Jan 30 13:53:43.102370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62295310f1f850c0de6ee8902edbf7665a4d7635dcb0e1a6ee196f6e50a78794-rootfs.mount: Deactivated successfully.
Jan 30 13:53:43.340477 kubelet[2420]: E0130 13:53:43.340405 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:43.485418 kubelet[2420]: E0130 13:53:43.485256 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjwh8" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5"
Jan 30 13:53:43.538455 containerd[1972]: time="2025-01-30T13:53:43.537645342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 30 13:53:44.341618 kubelet[2420]: E0130 13:53:44.341561 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:45.342563 kubelet[2420]: E0130 13:53:45.342505 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:45.486446 kubelet[2420]: E0130 13:53:45.486180 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjwh8" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5"
Jan 30 13:53:46.343243 kubelet[2420]: E0130 13:53:46.342973 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:47.343214 kubelet[2420]: E0130 13:53:47.343139 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:47.485998 kubelet[2420]: E0130 13:53:47.485942 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjwh8" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5"
Jan 30 13:53:47.698688 containerd[1972]: time="2025-01-30T13:53:47.698550978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:47.699824 containerd[1972]: time="2025-01-30T13:53:47.699772698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 30 13:53:47.700686 containerd[1972]: time="2025-01-30T13:53:47.700631227Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:47.703626 containerd[1972]: time="2025-01-30T13:53:47.702852132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:47.703626 containerd[1972]: time="2025-01-30T13:53:47.703490831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.165159607s"
Jan 30 13:53:47.703626 containerd[1972]: time="2025-01-30T13:53:47.703526635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 30 13:53:47.706233 containerd[1972]: time="2025-01-30T13:53:47.706203075Z" level=info msg="CreateContainer within sandbox \"d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 30 13:53:47.720636 containerd[1972]: time="2025-01-30T13:53:47.720549885Z" level=info msg="CreateContainer within sandbox \"d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e\""
Jan 30 13:53:47.723420 containerd[1972]: time="2025-01-30T13:53:47.721460823Z" level=info msg="StartContainer for \"4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e\""
Jan 30 13:53:47.758135 systemd[1]: run-containerd-runc-k8s.io-4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e-runc.8radAF.mount: Deactivated successfully.
Jan 30 13:53:47.768620 systemd[1]: Started cri-containerd-4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e.scope - libcontainer container 4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e.
Jan 30 13:53:47.802402 containerd[1972]: time="2025-01-30T13:53:47.801621167Z" level=info msg="StartContainer for \"4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e\" returns successfully"
Jan 30 13:53:48.343617 kubelet[2420]: E0130 13:53:48.343520 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:48.504006 containerd[1972]: time="2025-01-30T13:53:48.503902777Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:53:48.515571 systemd[1]: cri-containerd-4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e.scope: Deactivated successfully.
Jan 30 13:53:48.527395 kubelet[2420]: I0130 13:53:48.526920 2420 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 13:53:48.555028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e-rootfs.mount: Deactivated successfully.
Jan 30 13:53:49.048024 containerd[1972]: time="2025-01-30T13:53:49.047929452Z" level=info msg="shim disconnected" id=4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e namespace=k8s.io
Jan 30 13:53:49.048024 containerd[1972]: time="2025-01-30T13:53:49.048008253Z" level=warning msg="cleaning up after shim disconnected" id=4cc0d06277f46eb02ca11dea2e6487b6aa68092c57010142474580a380af4f4e namespace=k8s.io
Jan 30 13:53:49.048024 containerd[1972]: time="2025-01-30T13:53:49.048023061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:53:49.344343 kubelet[2420]: E0130 13:53:49.344209 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:49.496287 systemd[1]: Created slice kubepods-besteffort-poda329e2c4_51a9_4843_a9e8_b48284b269b5.slice - libcontainer container kubepods-besteffort-poda329e2c4_51a9_4843_a9e8_b48284b269b5.slice.
Jan 30 13:53:49.506703 containerd[1972]: time="2025-01-30T13:53:49.502648304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjwh8,Uid:a329e2c4-51a9-4843-a9e8-b48284b269b5,Namespace:calico-system,Attempt:0,}"
Jan 30 13:53:49.566728 containerd[1972]: time="2025-01-30T13:53:49.566459802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 30 13:53:49.628489 containerd[1972]: time="2025-01-30T13:53:49.625081446Z" level=error msg="Failed to destroy network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:49.628823 containerd[1972]: time="2025-01-30T13:53:49.628770781Z" level=error msg="encountered an error cleaning up failed sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:49.628909 containerd[1972]: time="2025-01-30T13:53:49.628858626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjwh8,Uid:a329e2c4-51a9-4843-a9e8-b48284b269b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:49.629196 kubelet[2420]: E0130 13:53:49.629157 2420 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:49.629653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d-shm.mount: Deactivated successfully.
Jan 30 13:53:49.631862 kubelet[2420]: E0130 13:53:49.629861 2420 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjwh8"
Jan 30 13:53:49.631862 kubelet[2420]: E0130 13:53:49.629902 2420 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjwh8"
Jan 30 13:53:49.631862 kubelet[2420]: E0130 13:53:49.629965 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjwh8_calico-system(a329e2c4-51a9-4843-a9e8-b48284b269b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjwh8_calico-system(a329e2c4-51a9-4843-a9e8-b48284b269b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjwh8" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5"
Jan 30 13:53:50.344894 kubelet[2420]: E0130 13:53:50.344835 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:50.568668 kubelet[2420]: I0130 13:53:50.568630 2420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d"
Jan 30 13:53:50.569652 containerd[1972]: time="2025-01-30T13:53:50.569595108Z" level=info msg="StopPodSandbox for \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\""
Jan 30 13:53:50.570287 containerd[1972]: time="2025-01-30T13:53:50.569955561Z" level=info msg="Ensure that sandbox 5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d in task-service has been cleanup successfully"
Jan 30 13:53:50.608802 containerd[1972]: time="2025-01-30T13:53:50.608675822Z" level=error msg="StopPodSandbox for \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\" failed" error="failed to destroy network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:50.609591 kubelet[2420]: E0130 13:53:50.609018 2420 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d"
Jan 30 13:53:50.609591 kubelet[2420]: E0130 13:53:50.609199 2420 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d"}
Jan 30 13:53:50.609591 kubelet[2420]: E0130 13:53:50.609322 2420 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a329e2c4-51a9-4843-a9e8-b48284b269b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 13:53:50.609591 kubelet[2420]: E0130 13:53:50.609414 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a329e2c4-51a9-4843-a9e8-b48284b269b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjwh8" podUID="a329e2c4-51a9-4843-a9e8-b48284b269b5"
Jan 30 13:53:51.267472 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 13:53:51.346012 kubelet[2420]: E0130 13:53:51.345952 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:51.913503 kubelet[2420]: I0130 13:53:51.910466 2420 topology_manager.go:215] "Topology Admit Handler" podUID="21b007a3-ebbb-4563-8ea1-756127933ab6" podNamespace="default" podName="nginx-deployment-85f456d6dd-5b59x"
Jan 30 13:53:51.923625 systemd[1]: Created slice kubepods-besteffort-pod21b007a3_ebbb_4563_8ea1_756127933ab6.slice - libcontainer container kubepods-besteffort-pod21b007a3_ebbb_4563_8ea1_756127933ab6.slice.
Jan 30 13:53:51.954660 kubelet[2420]: W0130 13:53:51.954625 2420 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.29.156" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.29.156' and this object
Jan 30 13:53:51.956174 kubelet[2420]: E0130 13:53:51.956130 2420 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.29.156" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.29.156' and this object
Jan 30 13:53:52.080240 kubelet[2420]: I0130 13:53:52.080195 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5w8d\" (UniqueName: \"kubernetes.io/projected/21b007a3-ebbb-4563-8ea1-756127933ab6-kube-api-access-v5w8d\") pod \"nginx-deployment-85f456d6dd-5b59x\" (UID: \"21b007a3-ebbb-4563-8ea1-756127933ab6\") " pod="default/nginx-deployment-85f456d6dd-5b59x"
Jan 30 13:53:52.346181 kubelet[2420]: E0130 13:53:52.346140 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:53.190541 kubelet[2420]: E0130 13:53:53.190172 2420 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 30 13:53:53.190541 kubelet[2420]: E0130 13:53:53.190232 2420 projected.go:200] Error preparing data for projected volume kube-api-access-v5w8d for pod default/nginx-deployment-85f456d6dd-5b59x: failed to sync configmap cache: timed out waiting for the condition
Jan 30 13:53:53.190541 kubelet[2420]: E0130 13:53:53.190325 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21b007a3-ebbb-4563-8ea1-756127933ab6-kube-api-access-v5w8d podName:21b007a3-ebbb-4563-8ea1-756127933ab6 nodeName:}" failed. No retries permitted until 2025-01-30 13:53:53.69029577 +0000 UTC m=+18.061380479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v5w8d" (UniqueName: "kubernetes.io/projected/21b007a3-ebbb-4563-8ea1-756127933ab6-kube-api-access-v5w8d") pod "nginx-deployment-85f456d6dd-5b59x" (UID: "21b007a3-ebbb-4563-8ea1-756127933ab6") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 13:53:53.347693 kubelet[2420]: E0130 13:53:53.347646 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:54.029050 containerd[1972]: time="2025-01-30T13:53:54.028894068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5b59x,Uid:21b007a3-ebbb-4563-8ea1-756127933ab6,Namespace:default,Attempt:0,}"
Jan 30 13:53:54.288895 containerd[1972]: time="2025-01-30T13:53:54.285864371Z" level=error msg="Failed to destroy network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:54.288895 containerd[1972]: time="2025-01-30T13:53:54.288654152Z" level=error msg="encountered an error cleaning up failed sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:54.288895 containerd[1972]: time="2025-01-30T13:53:54.288721909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5b59x,Uid:21b007a3-ebbb-4563-8ea1-756127933ab6,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:54.289221 kubelet[2420]: E0130 13:53:54.289017 2420 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:54.289221 kubelet[2420]: E0130 13:53:54.289091 2420 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5b59x"
Jan 30 13:53:54.289221 kubelet[2420]: E0130 13:53:54.289118 2420 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5b59x"
Jan 30 13:53:54.289584 kubelet[2420]: E0130 13:53:54.289182 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5b59x_default(21b007a3-ebbb-4563-8ea1-756127933ab6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5b59x_default(21b007a3-ebbb-4563-8ea1-756127933ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5b59x" podUID="21b007a3-ebbb-4563-8ea1-756127933ab6"
Jan 30 13:53:54.290954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086-shm.mount: Deactivated successfully.
Jan 30 13:53:54.349307 kubelet[2420]: E0130 13:53:54.349260 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:54.599698 kubelet[2420]: I0130 13:53:54.599410 2420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086"
Jan 30 13:53:54.600153 containerd[1972]: time="2025-01-30T13:53:54.600112948Z" level=info msg="StopPodSandbox for \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\""
Jan 30 13:53:54.600403 containerd[1972]: time="2025-01-30T13:53:54.600333886Z" level=info msg="Ensure that sandbox 878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086 in task-service has been cleanup successfully"
Jan 30 13:53:54.663380 containerd[1972]: time="2025-01-30T13:53:54.663303533Z" level=error msg="StopPodSandbox for \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\" failed" error="failed to destroy network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:53:54.663583 kubelet[2420]: E0130 13:53:54.663543 2420 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086"
Jan 30 13:53:54.663669 kubelet[2420]: E0130 13:53:54.663596 2420 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086"}
Jan 30 13:53:54.663669 kubelet[2420]: E0130 13:53:54.663644 2420 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21b007a3-ebbb-4563-8ea1-756127933ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 13:53:54.663808 kubelet[2420]: E0130 13:53:54.663675 2420 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21b007a3-ebbb-4563-8ea1-756127933ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5b59x" podUID="21b007a3-ebbb-4563-8ea1-756127933ab6"
Jan 30 13:53:55.350071 kubelet[2420]: E0130 13:53:55.349823 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:56.327314 kubelet[2420]: E0130 13:53:56.327276 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:56.357211 kubelet[2420]: E0130 13:53:56.355434 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:57.158729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675256627.mount: Deactivated successfully.
Jan 30 13:53:57.216502 containerd[1972]: time="2025-01-30T13:53:57.216436780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:57.218545 containerd[1972]: time="2025-01-30T13:53:57.218391319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 30 13:53:57.226428 containerd[1972]: time="2025-01-30T13:53:57.224692752Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:57.231328 containerd[1972]: time="2025-01-30T13:53:57.230541895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:57.231328 containerd[1972]: time="2025-01-30T13:53:57.231176679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.664667452s"
Jan 30 13:53:57.231328 containerd[1972]: time="2025-01-30T13:53:57.231216269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 30 13:53:57.250937 containerd[1972]: time="2025-01-30T13:53:57.250890378Z" level=info msg="CreateContainer within sandbox \"d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 30 13:53:57.279073 containerd[1972]: time="2025-01-30T13:53:57.279015038Z" level=info msg="CreateContainer within sandbox \"d06a85b8ad0991712718590669bfd1737f85f8694c53b1ad5128a9df56166da1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"532ec525aa34c654885528e14da16cbd8046e66673e670cf6d902cda9491945d\""
Jan 30 13:53:57.281347 containerd[1972]: time="2025-01-30T13:53:57.279860268Z" level=info msg="StartContainer for \"532ec525aa34c654885528e14da16cbd8046e66673e670cf6d902cda9491945d\""
Jan 30 13:53:57.358021 kubelet[2420]: E0130 13:53:57.357978 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:53:57.385612 systemd[1]: Started cri-containerd-532ec525aa34c654885528e14da16cbd8046e66673e670cf6d902cda9491945d.scope - libcontainer container 532ec525aa34c654885528e14da16cbd8046e66673e670cf6d902cda9491945d.
Jan 30 13:53:57.429459 containerd[1972]: time="2025-01-30T13:53:57.429202127Z" level=info msg="StartContainer for \"532ec525aa34c654885528e14da16cbd8046e66673e670cf6d902cda9491945d\" returns successfully"
Jan 30 13:53:57.524804 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 30 13:53:57.524990 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 30 13:53:58.359034 kubelet[2420]: E0130 13:53:58.358975 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:58.618635 kubelet[2420]: I0130 13:53:58.618522 2420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:53:59.359197 kubelet[2420]: E0130 13:53:59.359140 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:59.538396 kernel: bpftool[3190]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:53:59.846336 systemd-networkd[1849]: vxlan.calico: Link UP Jan 30 13:53:59.846346 systemd-networkd[1849]: vxlan.calico: Gained carrier Jan 30 13:53:59.850075 (udev-worker)[3208]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:59.902177 (udev-worker)[3048]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:00.359645 kubelet[2420]: E0130 13:54:00.359580 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:01.360262 kubelet[2420]: E0130 13:54:01.360214 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:01.638681 systemd-networkd[1849]: vxlan.calico: Gained IPv6LL Jan 30 13:54:02.360768 kubelet[2420]: E0130 13:54:02.360711 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:02.486986 containerd[1972]: time="2025-01-30T13:54:02.486610430Z" level=info msg="StopPodSandbox for \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\"" Jan 30 13:54:02.771081 kubelet[2420]: I0130 13:54:02.770697 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q2znc" podStartSLOduration=8.363850795 
podStartE2EDuration="26.770676049s" podCreationTimestamp="2025-01-30 13:53:36 +0000 UTC" firstStartedPulling="2025-01-30 13:53:38.825500688 +0000 UTC m=+3.196585390" lastFinishedPulling="2025-01-30 13:53:57.232325946 +0000 UTC m=+21.603410644" observedRunningTime="2025-01-30 13:53:57.644468372 +0000 UTC m=+22.015553083" watchObservedRunningTime="2025-01-30 13:54:02.770676049 +0000 UTC m=+27.141760757" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.764 [INFO][3276] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.765 [INFO][3276] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" iface="eth0" netns="/var/run/netns/cni-1a8ee2ef-e47c-5e93-e6af-2ec1e3970b1a" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.765 [INFO][3276] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" iface="eth0" netns="/var/run/netns/cni-1a8ee2ef-e47c-5e93-e6af-2ec1e3970b1a" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.768 [INFO][3276] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" iface="eth0" netns="/var/run/netns/cni-1a8ee2ef-e47c-5e93-e6af-2ec1e3970b1a" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.768 [INFO][3276] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.768 [INFO][3276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.904 [INFO][3282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.904 [INFO][3282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.904 [INFO][3282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.961 [WARNING][3282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.961 [INFO][3282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.979 [INFO][3282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:02.992769 containerd[1972]: 2025-01-30 13:54:02.984 [INFO][3276] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:02.996750 containerd[1972]: time="2025-01-30T13:54:02.992965553Z" level=info msg="TearDown network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\" successfully" Jan 30 13:54:02.996750 containerd[1972]: time="2025-01-30T13:54:02.993009185Z" level=info msg="StopPodSandbox for \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\" returns successfully" Jan 30 13:54:02.997725 containerd[1972]: time="2025-01-30T13:54:02.997655615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjwh8,Uid:a329e2c4-51a9-4843-a9e8-b48284b269b5,Namespace:calico-system,Attempt:1,}" Jan 30 13:54:03.003536 systemd[1]: run-netns-cni\x2d1a8ee2ef\x2de47c\x2d5e93\x2de6af\x2d2ec1e3970b1a.mount: Deactivated successfully. 
Jan 30 13:54:03.361112 kubelet[2420]: E0130 13:54:03.361057 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:03.419024 systemd-networkd[1849]: calidd06851b23a: Link UP Jan 30 13:54:03.419654 systemd-networkd[1849]: calidd06851b23a: Gained carrier Jan 30 13:54:03.433626 (udev-worker)[3308]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.181 [INFO][3291] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.156-k8s-csi--node--driver--zjwh8-eth0 csi-node-driver- calico-system a329e2c4-51a9-4843-a9e8-b48284b269b5 1043 0 2025-01-30 13:53:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.29.156 csi-node-driver-zjwh8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidd06851b23a [] []}} ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Namespace="calico-system" Pod="csi-node-driver-zjwh8" WorkloadEndpoint="172.31.29.156-k8s-csi--node--driver--zjwh8-" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.181 [INFO][3291] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Namespace="calico-system" Pod="csi-node-driver-zjwh8" WorkloadEndpoint="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.238 [INFO][3300] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" 
HandleID="k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.269 [INFO][3300] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" HandleID="k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.29.156", "pod":"csi-node-driver-zjwh8", "timestamp":"2025-01-30 13:54:03.238132468 +0000 UTC"}, Hostname:"172.31.29.156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.269 [INFO][3300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.269 [INFO][3300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.269 [INFO][3300] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.156' Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.280 [INFO][3300] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.301 [INFO][3300] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.334 [INFO][3300] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.339 [INFO][3300] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.343 [INFO][3300] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.343 [INFO][3300] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.347 [INFO][3300] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5 Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.375 [INFO][3300] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.392 [INFO][3300] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.193/26] block=192.168.3.192/26 
handle="k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.392 [INFO][3300] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.193/26] handle="k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" host="172.31.29.156" Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.392 [INFO][3300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:03.469271 containerd[1972]: 2025-01-30 13:54:03.392 [INFO][3300] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.193/26] IPv6=[] ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" HandleID="k8s-pod-network.aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:03.471943 containerd[1972]: 2025-01-30 13:54:03.401 [INFO][3291] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Namespace="calico-system" Pod="csi-node-driver-zjwh8" WorkloadEndpoint="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-csi--node--driver--zjwh8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a329e2c4-51a9-4843-a9e8-b48284b269b5", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"", Pod:"csi-node-driver-zjwh8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd06851b23a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:03.471943 containerd[1972]: 2025-01-30 13:54:03.402 [INFO][3291] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.193/32] ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Namespace="calico-system" Pod="csi-node-driver-zjwh8" WorkloadEndpoint="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:03.471943 containerd[1972]: 2025-01-30 13:54:03.402 [INFO][3291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd06851b23a ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Namespace="calico-system" Pod="csi-node-driver-zjwh8" WorkloadEndpoint="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:03.471943 containerd[1972]: 2025-01-30 13:54:03.421 [INFO][3291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Namespace="calico-system" Pod="csi-node-driver-zjwh8" WorkloadEndpoint="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:03.471943 containerd[1972]: 2025-01-30 13:54:03.422 [INFO][3291] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Namespace="calico-system" 
Pod="csi-node-driver-zjwh8" WorkloadEndpoint="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-csi--node--driver--zjwh8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a329e2c4-51a9-4843-a9e8-b48284b269b5", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5", Pod:"csi-node-driver-zjwh8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd06851b23a", MAC:"ae:72:99:37:a1:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:03.471943 containerd[1972]: 2025-01-30 13:54:03.463 [INFO][3291] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5" Namespace="calico-system" Pod="csi-node-driver-zjwh8" WorkloadEndpoint="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:03.537014 
containerd[1972]: time="2025-01-30T13:54:03.535248274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:03.537014 containerd[1972]: time="2025-01-30T13:54:03.535345963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:03.537014 containerd[1972]: time="2025-01-30T13:54:03.535391355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:03.537014 containerd[1972]: time="2025-01-30T13:54:03.535500456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:03.572648 systemd[1]: run-containerd-runc-k8s.io-aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5-runc.dSy2i6.mount: Deactivated successfully. Jan 30 13:54:03.581669 systemd[1]: Started cri-containerd-aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5.scope - libcontainer container aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5. 
Jan 30 13:54:03.591390 kubelet[2420]: I0130 13:54:03.588777 2420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:03.626331 containerd[1972]: time="2025-01-30T13:54:03.626189337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjwh8,Uid:a329e2c4-51a9-4843-a9e8-b48284b269b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5\"" Jan 30 13:54:03.630841 containerd[1972]: time="2025-01-30T13:54:03.630800259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:54:04.361277 kubelet[2420]: E0130 13:54:04.361216 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:04.710719 systemd-networkd[1849]: calidd06851b23a: Gained IPv6LL Jan 30 13:54:05.014488 containerd[1972]: time="2025-01-30T13:54:05.012427632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:05.014845 containerd[1972]: time="2025-01-30T13:54:05.014508473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:54:05.016542 containerd[1972]: time="2025-01-30T13:54:05.016478039Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:05.024907 containerd[1972]: time="2025-01-30T13:54:05.024832727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:05.025789 containerd[1972]: time="2025-01-30T13:54:05.025741477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.39489137s" Jan 30 13:54:05.025898 containerd[1972]: time="2025-01-30T13:54:05.025794650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:54:05.045664 containerd[1972]: time="2025-01-30T13:54:05.045610187Z" level=info msg="CreateContainer within sandbox \"aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:54:05.113523 containerd[1972]: time="2025-01-30T13:54:05.112389828Z" level=info msg="CreateContainer within sandbox \"aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ec3e107676aec03d92a3251f5d77694550a793e80df3e55d80cfb793c9c42239\"" Jan 30 13:54:05.117264 containerd[1972]: time="2025-01-30T13:54:05.117184589Z" level=info msg="StartContainer for \"ec3e107676aec03d92a3251f5d77694550a793e80df3e55d80cfb793c9c42239\"" Jan 30 13:54:05.190573 systemd[1]: Started cri-containerd-ec3e107676aec03d92a3251f5d77694550a793e80df3e55d80cfb793c9c42239.scope - libcontainer container ec3e107676aec03d92a3251f5d77694550a793e80df3e55d80cfb793c9c42239. 
Jan 30 13:54:05.275061 containerd[1972]: time="2025-01-30T13:54:05.274572330Z" level=info msg="StartContainer for \"ec3e107676aec03d92a3251f5d77694550a793e80df3e55d80cfb793c9c42239\" returns successfully" Jan 30 13:54:05.278350 containerd[1972]: time="2025-01-30T13:54:05.278309890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:54:05.361907 kubelet[2420]: E0130 13:54:05.361838 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:06.112251 update_engine[1951]: I20250130 13:54:06.112161 1951 update_attempter.cc:509] Updating boot flags... Jan 30 13:54:06.199490 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3465) Jan 30 13:54:06.362339 kubelet[2420]: E0130 13:54:06.362224 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:06.835377 containerd[1972]: time="2025-01-30T13:54:06.835312229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:06.836790 containerd[1972]: time="2025-01-30T13:54:06.836659476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:54:06.838902 containerd[1972]: time="2025-01-30T13:54:06.838025313Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:06.840377 containerd[1972]: time="2025-01-30T13:54:06.840328484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 
13:54:06.841123 containerd[1972]: time="2025-01-30T13:54:06.841076158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.562389228s" Jan 30 13:54:06.841285 containerd[1972]: time="2025-01-30T13:54:06.841120030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:54:06.845497 containerd[1972]: time="2025-01-30T13:54:06.845461612Z" level=info msg="CreateContainer within sandbox \"aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:54:06.869234 containerd[1972]: time="2025-01-30T13:54:06.869188965Z" level=info msg="CreateContainer within sandbox \"aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"544d2d4b79d5366cc67077e5c3b012ba26cfb26e411caef74647cb00cab4b01f\"" Jan 30 13:54:06.869824 containerd[1972]: time="2025-01-30T13:54:06.869796984Z" level=info msg="StartContainer for \"544d2d4b79d5366cc67077e5c3b012ba26cfb26e411caef74647cb00cab4b01f\"" Jan 30 13:54:06.949545 systemd[1]: Started cri-containerd-544d2d4b79d5366cc67077e5c3b012ba26cfb26e411caef74647cb00cab4b01f.scope - libcontainer container 544d2d4b79d5366cc67077e5c3b012ba26cfb26e411caef74647cb00cab4b01f. 
Jan 30 13:54:07.007761 containerd[1972]: time="2025-01-30T13:54:07.007714128Z" level=info msg="StartContainer for \"544d2d4b79d5366cc67077e5c3b012ba26cfb26e411caef74647cb00cab4b01f\" returns successfully" Jan 30 13:54:07.363759 kubelet[2420]: E0130 13:54:07.363718 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:07.494164 kubelet[2420]: I0130 13:54:07.494134 2420 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:54:07.494164 kubelet[2420]: I0130 13:54:07.494175 2420 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:54:07.576575 ntpd[1942]: Listen normally on 7 vxlan.calico 192.168.3.192:123 Jan 30 13:54:07.576667 ntpd[1942]: Listen normally on 8 vxlan.calico [fe80::6483:59ff:fe91:1619%3]:123 Jan 30 13:54:07.577137 ntpd[1942]: 30 Jan 13:54:07 ntpd[1942]: Listen normally on 7 vxlan.calico 192.168.3.192:123 Jan 30 13:54:07.577137 ntpd[1942]: 30 Jan 13:54:07 ntpd[1942]: Listen normally on 8 vxlan.calico [fe80::6483:59ff:fe91:1619%3]:123 Jan 30 13:54:07.577137 ntpd[1942]: 30 Jan 13:54:07 ntpd[1942]: Listen normally on 9 calidd06851b23a [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:54:07.576727 ntpd[1942]: Listen normally on 9 calidd06851b23a [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:54:08.364279 kubelet[2420]: E0130 13:54:08.364235 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:09.364806 kubelet[2420]: E0130 13:54:09.364745 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:09.486566 containerd[1972]: time="2025-01-30T13:54:09.486067793Z" level=info msg="StopPodSandbox for 
\"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\"" Jan 30 13:54:09.616189 kubelet[2420]: I0130 13:54:09.615816 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zjwh8" podStartSLOduration=30.402028144 podStartE2EDuration="33.615797816s" podCreationTimestamp="2025-01-30 13:53:36 +0000 UTC" firstStartedPulling="2025-01-30 13:54:03.629556245 +0000 UTC m=+28.000640938" lastFinishedPulling="2025-01-30 13:54:06.843325912 +0000 UTC m=+31.214410610" observedRunningTime="2025-01-30 13:54:07.678285629 +0000 UTC m=+32.049370340" watchObservedRunningTime="2025-01-30 13:54:09.615797816 +0000 UTC m=+33.986882526" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.618 [INFO][3604] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.619 [INFO][3604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" iface="eth0" netns="/var/run/netns/cni-b93e4126-7f33-42da-d75c-a764b41358ad" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.620 [INFO][3604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" iface="eth0" netns="/var/run/netns/cni-b93e4126-7f33-42da-d75c-a764b41358ad" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.620 [INFO][3604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" iface="eth0" netns="/var/run/netns/cni-b93e4126-7f33-42da-d75c-a764b41358ad" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.620 [INFO][3604] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.620 [INFO][3604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.682 [INFO][3610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.683 [INFO][3610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.683 [INFO][3610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.708 [WARNING][3610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.708 [INFO][3610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.711 [INFO][3610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:09.713919 containerd[1972]: 2025-01-30 13:54:09.712 [INFO][3604] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:09.716537 containerd[1972]: time="2025-01-30T13:54:09.714800142Z" level=info msg="TearDown network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\" successfully" Jan 30 13:54:09.716537 containerd[1972]: time="2025-01-30T13:54:09.714838446Z" level=info msg="StopPodSandbox for \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\" returns successfully" Jan 30 13:54:09.716537 containerd[1972]: time="2025-01-30T13:54:09.715927203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5b59x,Uid:21b007a3-ebbb-4563-8ea1-756127933ab6,Namespace:default,Attempt:1,}" Jan 30 13:54:09.728597 systemd[1]: run-netns-cni\x2db93e4126\x2d7f33\x2d42da\x2dd75c\x2da764b41358ad.mount: Deactivated successfully. 
Jan 30 13:54:09.995620 systemd-networkd[1849]: cali323a990d68d: Link UP Jan 30 13:54:09.996396 systemd-networkd[1849]: cali323a990d68d: Gained carrier Jan 30 13:54:10.002200 (udev-worker)[3635]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.879 [INFO][3618] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0 nginx-deployment-85f456d6dd- default 21b007a3-ebbb-4563-8ea1-756127933ab6 1087 0 2025-01-30 13:53:51 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.29.156 nginx-deployment-85f456d6dd-5b59x eth0 default [] [] [kns.default ksa.default.default] cali323a990d68d [] []}} ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Namespace="default" Pod="nginx-deployment-85f456d6dd-5b59x" WorkloadEndpoint="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.879 [INFO][3618] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Namespace="default" Pod="nginx-deployment-85f456d6dd-5b59x" WorkloadEndpoint="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.919 [INFO][3628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" HandleID="k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.942 [INFO][3628] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" HandleID="k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030ecb0), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.156", "pod":"nginx-deployment-85f456d6dd-5b59x", "timestamp":"2025-01-30 13:54:09.91939243 +0000 UTC"}, Hostname:"172.31.29.156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.942 [INFO][3628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.942 [INFO][3628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.942 [INFO][3628] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.156' Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.948 [INFO][3628] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.955 [INFO][3628] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.960 [INFO][3628] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.962 [INFO][3628] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.965 [INFO][3628] ipam/ipam.go 232: Affinity is confirmed and block has 
been loaded cidr=192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.965 [INFO][3628] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.966 [INFO][3628] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773 Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.972 [INFO][3628] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.987 [INFO][3628] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.194/26] block=192.168.3.192/26 handle="k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.987 [INFO][3628] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.194/26] handle="k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" host="172.31.29.156" Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.988 [INFO][3628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:54:10.020488 containerd[1972]: 2025-01-30 13:54:09.988 [INFO][3628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.194/26] IPv6=[] ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" HandleID="k8s-pod-network.2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:10.023765 containerd[1972]: 2025-01-30 13:54:09.989 [INFO][3618] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Namespace="default" Pod="nginx-deployment-85f456d6dd-5b59x" WorkloadEndpoint="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"21b007a3-ebbb-4563-8ea1-756127933ab6", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-5b59x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali323a990d68d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:10.023765 containerd[1972]: 2025-01-30 13:54:09.990 [INFO][3618] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.194/32] ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Namespace="default" Pod="nginx-deployment-85f456d6dd-5b59x" WorkloadEndpoint="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:10.023765 containerd[1972]: 2025-01-30 13:54:09.990 [INFO][3618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali323a990d68d ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Namespace="default" Pod="nginx-deployment-85f456d6dd-5b59x" WorkloadEndpoint="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:10.023765 containerd[1972]: 2025-01-30 13:54:09.999 [INFO][3618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Namespace="default" Pod="nginx-deployment-85f456d6dd-5b59x" WorkloadEndpoint="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:10.023765 containerd[1972]: 2025-01-30 13:54:10.001 [INFO][3618] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Namespace="default" Pod="nginx-deployment-85f456d6dd-5b59x" WorkloadEndpoint="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"21b007a3-ebbb-4563-8ea1-756127933ab6", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 51, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773", Pod:"nginx-deployment-85f456d6dd-5b59x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali323a990d68d", MAC:"ce:a8:ed:44:b7:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:10.023765 containerd[1972]: 2025-01-30 13:54:10.018 [INFO][3618] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773" Namespace="default" Pod="nginx-deployment-85f456d6dd-5b59x" WorkloadEndpoint="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:10.075388 containerd[1972]: time="2025-01-30T13:54:10.075174236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:10.075388 containerd[1972]: time="2025-01-30T13:54:10.075294137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:10.075388 containerd[1972]: time="2025-01-30T13:54:10.075312101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:10.076438 containerd[1972]: time="2025-01-30T13:54:10.076315038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:10.113264 systemd[1]: Started cri-containerd-2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773.scope - libcontainer container 2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773. Jan 30 13:54:10.162816 containerd[1972]: time="2025-01-30T13:54:10.162772276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5b59x,Uid:21b007a3-ebbb-4563-8ea1-756127933ab6,Namespace:default,Attempt:1,} returns sandbox id \"2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773\"" Jan 30 13:54:10.170838 containerd[1972]: time="2025-01-30T13:54:10.170616518Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:54:10.364951 kubelet[2420]: E0130 13:54:10.364894 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:11.367381 kubelet[2420]: E0130 13:54:11.365585 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:11.561208 systemd-networkd[1849]: cali323a990d68d: Gained IPv6LL Jan 30 13:54:12.366352 kubelet[2420]: E0130 13:54:12.366287 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:13.367486 kubelet[2420]: E0130 13:54:13.367448 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:13.471335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116742818.mount: Deactivated successfully. 
Jan 30 13:54:13.576754 ntpd[1942]: Listen normally on 10 cali323a990d68d [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:54:13.577133 ntpd[1942]: 30 Jan 13:54:13 ntpd[1942]: Listen normally on 10 cali323a990d68d [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:54:14.368549 kubelet[2420]: E0130 13:54:14.368512 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:15.370188 kubelet[2420]: E0130 13:54:15.370154 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:15.385467 containerd[1972]: time="2025-01-30T13:54:15.385420904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:15.387914 containerd[1972]: time="2025-01-30T13:54:15.387691824Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:54:15.394032 containerd[1972]: time="2025-01-30T13:54:15.393980369Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:15.401023 containerd[1972]: time="2025-01-30T13:54:15.400566593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:15.402392 containerd[1972]: time="2025-01-30T13:54:15.401902129Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.231234408s" Jan 30 13:54:15.402392 containerd[1972]: 
time="2025-01-30T13:54:15.401951181Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:54:15.417993 containerd[1972]: time="2025-01-30T13:54:15.417949356Z" level=info msg="CreateContainer within sandbox \"2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:54:15.448082 containerd[1972]: time="2025-01-30T13:54:15.448032101Z" level=info msg="CreateContainer within sandbox \"2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1686c4e8fe5051c32803e29276048f942d0871955a5c763cbc0712be876fd288\"" Jan 30 13:54:15.449169 containerd[1972]: time="2025-01-30T13:54:15.449136580Z" level=info msg="StartContainer for \"1686c4e8fe5051c32803e29276048f942d0871955a5c763cbc0712be876fd288\"" Jan 30 13:54:15.523611 systemd[1]: Started cri-containerd-1686c4e8fe5051c32803e29276048f942d0871955a5c763cbc0712be876fd288.scope - libcontainer container 1686c4e8fe5051c32803e29276048f942d0871955a5c763cbc0712be876fd288. 
Jan 30 13:54:15.565062 containerd[1972]: time="2025-01-30T13:54:15.564837681Z" level=info msg="StartContainer for \"1686c4e8fe5051c32803e29276048f942d0871955a5c763cbc0712be876fd288\" returns successfully" Jan 30 13:54:15.766520 kubelet[2420]: I0130 13:54:15.766352 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-5b59x" podStartSLOduration=19.525173731 podStartE2EDuration="24.764211759s" podCreationTimestamp="2025-01-30 13:53:51 +0000 UTC" firstStartedPulling="2025-01-30 13:54:10.164515244 +0000 UTC m=+34.535599934" lastFinishedPulling="2025-01-30 13:54:15.403553271 +0000 UTC m=+39.774637962" observedRunningTime="2025-01-30 13:54:15.756330767 +0000 UTC m=+40.127415480" watchObservedRunningTime="2025-01-30 13:54:15.764211759 +0000 UTC m=+40.135296470" Jan 30 13:54:16.327386 kubelet[2420]: E0130 13:54:16.327288 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:16.370767 kubelet[2420]: E0130 13:54:16.370711 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:17.371525 kubelet[2420]: E0130 13:54:17.371472 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:18.372147 kubelet[2420]: E0130 13:54:18.371715 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:19.373065 kubelet[2420]: E0130 13:54:19.372953 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:20.374467 kubelet[2420]: E0130 13:54:20.374419 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:21.375744 kubelet[2420]: E0130 13:54:21.375692 2420 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:22.174403 kubelet[2420]: I0130 13:54:22.174341 2420 topology_manager.go:215] "Topology Admit Handler" podUID="afcb8804-1b9a-4617-a3a4-c961bc951e6d" podNamespace="default" podName="nfs-server-provisioner-0" Jan 30 13:54:22.184065 systemd[1]: Created slice kubepods-besteffort-podafcb8804_1b9a_4617_a3a4_c961bc951e6d.slice - libcontainer container kubepods-besteffort-podafcb8804_1b9a_4617_a3a4_c961bc951e6d.slice. Jan 30 13:54:22.334827 kubelet[2420]: I0130 13:54:22.334783 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87chq\" (UniqueName: \"kubernetes.io/projected/afcb8804-1b9a-4617-a3a4-c961bc951e6d-kube-api-access-87chq\") pod \"nfs-server-provisioner-0\" (UID: \"afcb8804-1b9a-4617-a3a4-c961bc951e6d\") " pod="default/nfs-server-provisioner-0" Jan 30 13:54:22.335016 kubelet[2420]: I0130 13:54:22.334872 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/afcb8804-1b9a-4617-a3a4-c961bc951e6d-data\") pod \"nfs-server-provisioner-0\" (UID: \"afcb8804-1b9a-4617-a3a4-c961bc951e6d\") " pod="default/nfs-server-provisioner-0" Jan 30 13:54:22.376898 kubelet[2420]: E0130 13:54:22.376844 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:22.490194 containerd[1972]: time="2025-01-30T13:54:22.490065835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:afcb8804-1b9a-4617-a3a4-c961bc951e6d,Namespace:default,Attempt:0,}" Jan 30 13:54:22.731823 systemd-networkd[1849]: cali60e51b789ff: Link UP Jan 30 13:54:22.732130 systemd-networkd[1849]: cali60e51b789ff: Gained carrier Jan 30 13:54:22.737342 (udev-worker)[3816]: Network interface NamePolicy= disabled on kernel command line. 
Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.581 [INFO][3798] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.156-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default afcb8804-1b9a-4617-a3a4-c961bc951e6d 1141 0 2025-01-30 13:54:22 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.29.156 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.156-k8s-nfs--server--provisioner--0-" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.582 [INFO][3798] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.626 [INFO][3809] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" 
HandleID="k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Workload="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.654 [INFO][3809] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" HandleID="k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Workload="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.156", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-30 13:54:22.626093742 +0000 UTC"}, Hostname:"172.31.29.156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.654 [INFO][3809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.654 [INFO][3809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.654 [INFO][3809] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.156' Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.666 [INFO][3809] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.676 [INFO][3809] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.684 [INFO][3809] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.687 [INFO][3809] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.692 [INFO][3809] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.692 [INFO][3809] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.696 [INFO][3809] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01 Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.708 [INFO][3809] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.721 [INFO][3809] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.195/26] block=192.168.3.192/26 
handle="k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.721 [INFO][3809] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.195/26] handle="k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" host="172.31.29.156" Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.722 [INFO][3809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:22.750586 containerd[1972]: 2025-01-30 13:54:22.722 [INFO][3809] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.195/26] IPv6=[] ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" HandleID="k8s-pod-network.488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Workload="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:54:22.752261 containerd[1972]: 2025-01-30 13:54:22.723 [INFO][3798] cni-plugin/k8s.go 386: Populated endpoint ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"afcb8804-1b9a-4617-a3a4-c961bc951e6d", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:22.752261 containerd[1972]: 2025-01-30 13:54:22.724 [INFO][3798] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.195/32] ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:54:22.752261 containerd[1972]: 2025-01-30 13:54:22.724 [INFO][3798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:54:22.752261 containerd[1972]: 2025-01-30 13:54:22.730 [INFO][3798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:54:22.752858 containerd[1972]: 2025-01-30 13:54:22.731 [INFO][3798] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"afcb8804-1b9a-4617-a3a4-c961bc951e6d", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"d6:75:06:7f:a3:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:22.752858 containerd[1972]: 2025-01-30 13:54:22.748 [INFO][3798] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.156-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:54:22.804600 containerd[1972]: time="2025-01-30T13:54:22.804349120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:22.804600 containerd[1972]: time="2025-01-30T13:54:22.804526783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:22.804600 containerd[1972]: time="2025-01-30T13:54:22.804545536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:22.810444 containerd[1972]: time="2025-01-30T13:54:22.804638803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:22.844577 systemd[1]: Started cri-containerd-488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01.scope - libcontainer container 488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01. Jan 30 13:54:22.943009 containerd[1972]: time="2025-01-30T13:54:22.942882613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:afcb8804-1b9a-4617-a3a4-c961bc951e6d,Namespace:default,Attempt:0,} returns sandbox id \"488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01\"" Jan 30 13:54:22.953994 containerd[1972]: time="2025-01-30T13:54:22.953815392Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:54:23.381905 kubelet[2420]: E0130 13:54:23.381845 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:23.478128 systemd[1]: run-containerd-runc-k8s.io-488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01-runc.DVnA0F.mount: Deactivated successfully. 
Jan 30 13:54:24.382554 kubelet[2420]: E0130 13:54:24.382438 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:24.746984 systemd-networkd[1849]: cali60e51b789ff: Gained IPv6LL Jan 30 13:54:25.382814 kubelet[2420]: E0130 13:54:25.382701 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:25.905459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693465639.mount: Deactivated successfully. Jan 30 13:54:26.384086 kubelet[2420]: E0130 13:54:26.384048 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:27.384399 kubelet[2420]: E0130 13:54:27.384232 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:27.578886 ntpd[1942]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:54:27.579477 ntpd[1942]: 30 Jan 13:54:27 ntpd[1942]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:54:28.384848 kubelet[2420]: E0130 13:54:28.384790 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:29.268185 containerd[1972]: time="2025-01-30T13:54:29.268102744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:29.270650 containerd[1972]: time="2025-01-30T13:54:29.270576694Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 13:54:29.274954 containerd[1972]: time="2025-01-30T13:54:29.274352039Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:29.281963 containerd[1972]: time="2025-01-30T13:54:29.281911137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:29.283514 containerd[1972]: time="2025-01-30T13:54:29.283464967Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.329597756s" Jan 30 13:54:29.283709 containerd[1972]: time="2025-01-30T13:54:29.283686005Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:54:29.380617 containerd[1972]: time="2025-01-30T13:54:29.380564346Z" level=info msg="CreateContainer within sandbox \"488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:54:29.386034 kubelet[2420]: E0130 13:54:29.385990 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:29.425693 containerd[1972]: time="2025-01-30T13:54:29.425635851Z" level=info msg="CreateContainer within sandbox \"488a14e57df8edf9032175179e35ee77d40ee64454e0976819a737d8c396fe01\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fc0152bdb595bd369ad93ba37cc9d57a9637d0d90aaecbf8a7f037c75ea42ab9\"" Jan 30 13:54:29.426593 containerd[1972]: time="2025-01-30T13:54:29.426554726Z" level=info 
msg="StartContainer for \"fc0152bdb595bd369ad93ba37cc9d57a9637d0d90aaecbf8a7f037c75ea42ab9\"" Jan 30 13:54:29.486483 systemd[1]: Started cri-containerd-fc0152bdb595bd369ad93ba37cc9d57a9637d0d90aaecbf8a7f037c75ea42ab9.scope - libcontainer container fc0152bdb595bd369ad93ba37cc9d57a9637d0d90aaecbf8a7f037c75ea42ab9. Jan 30 13:54:29.558256 containerd[1972]: time="2025-01-30T13:54:29.557071136Z" level=info msg="StartContainer for \"fc0152bdb595bd369ad93ba37cc9d57a9637d0d90aaecbf8a7f037c75ea42ab9\" returns successfully" Jan 30 13:54:29.939498 kubelet[2420]: I0130 13:54:29.939288 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.590547705 podStartE2EDuration="7.937765617s" podCreationTimestamp="2025-01-30 13:54:22 +0000 UTC" firstStartedPulling="2025-01-30 13:54:22.953443612 +0000 UTC m=+47.324528303" lastFinishedPulling="2025-01-30 13:54:29.300661523 +0000 UTC m=+53.671746215" observedRunningTime="2025-01-30 13:54:29.907283978 +0000 UTC m=+54.278368690" watchObservedRunningTime="2025-01-30 13:54:29.937765617 +0000 UTC m=+54.308850327" Jan 30 13:54:30.386266 kubelet[2420]: E0130 13:54:30.386206 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:31.387183 kubelet[2420]: E0130 13:54:31.387133 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:32.388223 kubelet[2420]: E0130 13:54:32.388168 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:33.389012 kubelet[2420]: E0130 13:54:33.388971 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:34.389838 kubelet[2420]: E0130 13:54:34.389796 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 13:54:35.390881 kubelet[2420]: E0130 13:54:35.390826 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:36.326794 kubelet[2420]: E0130 13:54:36.326737 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:36.363513 containerd[1972]: time="2025-01-30T13:54:36.363472703Z" level=info msg="StopPodSandbox for \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\"" Jan 30 13:54:36.391992 kubelet[2420]: E0130 13:54:36.391904 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.444 [WARNING][3999] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-csi--node--driver--zjwh8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a329e2c4-51a9-4843-a9e8-b48284b269b5", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5", Pod:"csi-node-driver-zjwh8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd06851b23a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.444 [INFO][3999] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.444 [INFO][3999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" iface="eth0" netns="" Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.444 [INFO][3999] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.444 [INFO][3999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.488 [INFO][4005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.488 [INFO][4005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.488 [INFO][4005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.502 [WARNING][4005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.502 [INFO][4005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.506 [INFO][4005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:36.513170 containerd[1972]: 2025-01-30 13:54:36.509 [INFO][3999] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:36.514104 containerd[1972]: time="2025-01-30T13:54:36.513182609Z" level=info msg="TearDown network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\" successfully" Jan 30 13:54:36.514104 containerd[1972]: time="2025-01-30T13:54:36.513209204Z" level=info msg="StopPodSandbox for \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\" returns successfully" Jan 30 13:54:36.528865 containerd[1972]: time="2025-01-30T13:54:36.528519500Z" level=info msg="RemovePodSandbox for \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\"" Jan 30 13:54:36.528865 containerd[1972]: time="2025-01-30T13:54:36.528574082Z" level=info msg="Forcibly stopping sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\"" Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.596 [WARNING][4025] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-csi--node--driver--zjwh8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a329e2c4-51a9-4843-a9e8-b48284b269b5", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"aafc45943d6f60beca2d2c91a6f3494d01198ee6af024d8f10311006b85680c5", Pod:"csi-node-driver-zjwh8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd06851b23a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.597 [INFO][4025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.597 [INFO][4025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" iface="eth0" netns="" Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.597 [INFO][4025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.597 [INFO][4025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.657 [INFO][4031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.658 [INFO][4031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.659 [INFO][4031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.672 [WARNING][4031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.673 [INFO][4031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" HandleID="k8s-pod-network.5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Workload="172.31.29.156-k8s-csi--node--driver--zjwh8-eth0" Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.675 [INFO][4031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:36.680774 containerd[1972]: 2025-01-30 13:54:36.678 [INFO][4025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d" Jan 30 13:54:36.683064 containerd[1972]: time="2025-01-30T13:54:36.680763024Z" level=info msg="TearDown network for sandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\" successfully" Jan 30 13:54:36.715456 containerd[1972]: time="2025-01-30T13:54:36.715386681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:54:36.715619 containerd[1972]: time="2025-01-30T13:54:36.715488949Z" level=info msg="RemovePodSandbox \"5fe40c59fc65eac8b73a850f326d7b0d4b5c86ad5f8d73df478b78cf00b3511d\" returns successfully" Jan 30 13:54:36.716283 containerd[1972]: time="2025-01-30T13:54:36.716249072Z" level=info msg="StopPodSandbox for \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\"" Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.780 [WARNING][4050] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"21b007a3-ebbb-4563-8ea1-756127933ab6", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773", Pod:"nginx-deployment-85f456d6dd-5b59x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali323a990d68d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.780 [INFO][4050] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.780 [INFO][4050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" iface="eth0" netns="" Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.780 [INFO][4050] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.780 [INFO][4050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.839 [INFO][4056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.839 [INFO][4056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.839 [INFO][4056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.854 [WARNING][4056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.855 [INFO][4056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.860 [INFO][4056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:36.863497 containerd[1972]: 2025-01-30 13:54:36.862 [INFO][4050] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:36.864125 containerd[1972]: time="2025-01-30T13:54:36.863542615Z" level=info msg="TearDown network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\" successfully" Jan 30 13:54:36.864125 containerd[1972]: time="2025-01-30T13:54:36.863572561Z" level=info msg="StopPodSandbox for \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\" returns successfully" Jan 30 13:54:36.864125 containerd[1972]: time="2025-01-30T13:54:36.864091670Z" level=info msg="RemovePodSandbox for \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\"" Jan 30 13:54:36.864252 containerd[1972]: time="2025-01-30T13:54:36.864125421Z" level=info msg="Forcibly stopping sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\"" Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.928 [WARNING][4074] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"21b007a3-ebbb-4563-8ea1-756127933ab6", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"2bae822853cf26b03d68c3061acf269b791db91461beb5bb1c56f0a4d87f9773", Pod:"nginx-deployment-85f456d6dd-5b59x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali323a990d68d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.928 [INFO][4074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.928 [INFO][4074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" iface="eth0" netns="" Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.929 [INFO][4074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.929 [INFO][4074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.964 [INFO][4080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.964 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.964 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.976 [WARNING][4080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.976 [INFO][4080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" HandleID="k8s-pod-network.878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Workload="172.31.29.156-k8s-nginx--deployment--85f456d6dd--5b59x-eth0" Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.978 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:36.984565 containerd[1972]: 2025-01-30 13:54:36.981 [INFO][4074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086" Jan 30 13:54:36.984565 containerd[1972]: time="2025-01-30T13:54:36.983471652Z" level=info msg="TearDown network for sandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\" successfully" Jan 30 13:54:36.991493 containerd[1972]: time="2025-01-30T13:54:36.990315589Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:54:36.991493 containerd[1972]: time="2025-01-30T13:54:36.991429841Z" level=info msg="RemovePodSandbox \"878ac55d6bd2e8783eb4ed5fcdfd5f4cd3945618dd7790a33d23ef6b83dc2086\" returns successfully"
Jan 30 13:54:37.392698 kubelet[2420]: E0130 13:54:37.392651 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:38.393862 kubelet[2420]: E0130 13:54:38.393701 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:39.394612 kubelet[2420]: E0130 13:54:39.394558 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:40.395494 kubelet[2420]: E0130 13:54:40.395440 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:41.397280 kubelet[2420]: E0130 13:54:41.397230 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:42.398104 kubelet[2420]: E0130 13:54:42.398060 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:43.398247 kubelet[2420]: E0130 13:54:43.398193 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:44.399262 kubelet[2420]: E0130 13:54:44.399204 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:45.399752 kubelet[2420]: E0130 13:54:45.399697 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:46.400795 kubelet[2420]: E0130 13:54:46.400752 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:47.401952 kubelet[2420]: E0130 13:54:47.401907 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:48.402195 kubelet[2420]: E0130 13:54:48.402146 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:49.403091 kubelet[2420]: E0130 13:54:49.403029 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:50.404220 kubelet[2420]: E0130 13:54:50.404182 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:51.404414 kubelet[2420]: E0130 13:54:51.404342 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:52.404955 kubelet[2420]: E0130 13:54:52.404899 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:53.405856 kubelet[2420]: E0130 13:54:53.405801 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:54.406515 kubelet[2420]: E0130 13:54:54.406457 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:54.556684 kubelet[2420]: I0130 13:54:54.556641 2420 topology_manager.go:215] "Topology Admit Handler" podUID="93255a4e-f053-4332-a6b9-2658cab151dd" podNamespace="default" podName="test-pod-1"
Jan 30 13:54:54.565833 systemd[1]: Created slice kubepods-besteffort-pod93255a4e_f053_4332_a6b9_2658cab151dd.slice - libcontainer container kubepods-besteffort-pod93255a4e_f053_4332_a6b9_2658cab151dd.slice.
Jan 30 13:54:54.733138 kubelet[2420]: I0130 13:54:54.732991 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5231c366-586e-4208-9143-93cc94013e11\" (UniqueName: \"kubernetes.io/nfs/93255a4e-f053-4332-a6b9-2658cab151dd-pvc-5231c366-586e-4208-9143-93cc94013e11\") pod \"test-pod-1\" (UID: \"93255a4e-f053-4332-a6b9-2658cab151dd\") " pod="default/test-pod-1"
Jan 30 13:54:54.733138 kubelet[2420]: I0130 13:54:54.733052 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbxw\" (UniqueName: \"kubernetes.io/projected/93255a4e-f053-4332-a6b9-2658cab151dd-kube-api-access-ldbxw\") pod \"test-pod-1\" (UID: \"93255a4e-f053-4332-a6b9-2658cab151dd\") " pod="default/test-pod-1"
Jan 30 13:54:54.898569 kernel: FS-Cache: Loaded
Jan 30 13:54:55.012545 kernel: RPC: Registered named UNIX socket transport module.
Jan 30 13:54:55.012766 kernel: RPC: Registered udp transport module.
Jan 30 13:54:55.012806 kernel: RPC: Registered tcp transport module.
Jan 30 13:54:55.013680 kernel: RPC: Registered tcp-with-tls transport module.
Jan 30 13:54:55.013815 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 30 13:54:55.407377 kubelet[2420]: E0130 13:54:55.407264 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:55.517385 kernel: NFS: Registering the id_resolver key type
Jan 30 13:54:55.517517 kernel: Key type id_resolver registered
Jan 30 13:54:55.517549 kernel: Key type id_legacy registered
Jan 30 13:54:55.561615 nfsidmap[4126]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 30 13:54:55.574582 nfsidmap[4127]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 30 13:54:55.793584 containerd[1972]: time="2025-01-30T13:54:55.793537253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:93255a4e-f053-4332-a6b9-2658cab151dd,Namespace:default,Attempt:0,}"
Jan 30 13:54:56.042444 (udev-worker)[4112]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:54:56.045859 systemd-networkd[1849]: cali5ec59c6bf6e: Link UP
Jan 30 13:54:56.051893 systemd-networkd[1849]: cali5ec59c6bf6e: Gained carrier
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.889 [INFO][4129] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.156-k8s-test--pod--1-eth0 default 93255a4e-f053-4332-a6b9-2658cab151dd 1236 0 2025-01-30 13:54:23 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.29.156 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.156-k8s-test--pod--1-"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.889 [INFO][4129] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.156-k8s-test--pod--1-eth0"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.963 [INFO][4139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" HandleID="k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Workload="172.31.29.156-k8s-test--pod--1-eth0"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.978 [INFO][4139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" HandleID="k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Workload="172.31.29.156-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ca170), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.156", "pod":"test-pod-1", "timestamp":"2025-01-30 13:54:55.962134097 +0000 UTC"}, Hostname:"172.31.29.156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.978 [INFO][4139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.979 [INFO][4139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.979 [INFO][4139] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.156'
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.981 [INFO][4139] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.987 [INFO][4139] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.992 [INFO][4139] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:55.997 [INFO][4139] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:56.000 [INFO][4139] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:56.000 [INFO][4139] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:56.004 [INFO][4139] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:56.020 [INFO][4139] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:56.032 [INFO][4139] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.196/26] block=192.168.3.192/26 handle="k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:56.033 [INFO][4139] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.196/26] handle="k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" host="172.31.29.156"
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:56.033 [INFO][4139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:54:56.076233 containerd[1972]: 2025-01-30 13:54:56.033 [INFO][4139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.196/26] IPv6=[] ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" HandleID="k8s-pod-network.3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Workload="172.31.29.156-k8s-test--pod--1-eth0"
Jan 30 13:54:56.079042 containerd[1972]: 2025-01-30 13:54:56.036 [INFO][4129] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.156-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"93255a4e-f053-4332-a6b9-2658cab151dd", ResourceVersion:"1236", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:54:56.079042 containerd[1972]: 2025-01-30 13:54:56.037 [INFO][4129] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.196/32] ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.156-k8s-test--pod--1-eth0"
Jan 30 13:54:56.079042 containerd[1972]: 2025-01-30 13:54:56.037 [INFO][4129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.156-k8s-test--pod--1-eth0"
Jan 30 13:54:56.079042 containerd[1972]: 2025-01-30 13:54:56.051 [INFO][4129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.156-k8s-test--pod--1-eth0"
Jan 30 13:54:56.079042 containerd[1972]: 2025-01-30 13:54:56.052 [INFO][4129] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.156-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.156-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"93255a4e-f053-4332-a6b9-2658cab151dd", ResourceVersion:"1236", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.156", ContainerID:"3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"a2:1c:0c:79:12:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:54:56.079042 containerd[1972]: 2025-01-30 13:54:56.072 [INFO][4129] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.156-k8s-test--pod--1-eth0"
Jan 30 13:54:56.113504 containerd[1972]: time="2025-01-30T13:54:56.113346911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:54:56.113678 containerd[1972]: time="2025-01-30T13:54:56.113535677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:54:56.113678 containerd[1972]: time="2025-01-30T13:54:56.113596045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:54:56.114019 containerd[1972]: time="2025-01-30T13:54:56.113811179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:54:56.152615 systemd[1]: Started cri-containerd-3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b.scope - libcontainer container 3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b.
Jan 30 13:54:56.215777 containerd[1972]: time="2025-01-30T13:54:56.215718859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:93255a4e-f053-4332-a6b9-2658cab151dd,Namespace:default,Attempt:0,} returns sandbox id \"3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b\""
Jan 30 13:54:56.251597 containerd[1972]: time="2025-01-30T13:54:56.251554313Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 30 13:54:56.327182 kubelet[2420]: E0130 13:54:56.327026 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:56.408050 kubelet[2420]: E0130 13:54:56.408005 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:56.555380 containerd[1972]: time="2025-01-30T13:54:56.555139908Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:54:56.557134 containerd[1972]: time="2025-01-30T13:54:56.557063218Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 30 13:54:56.569146 containerd[1972]: time="2025-01-30T13:54:56.569095456Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 317.395934ms"
Jan 30 13:54:56.569146 containerd[1972]: time="2025-01-30T13:54:56.569143791Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 30 13:54:56.572029 containerd[1972]: time="2025-01-30T13:54:56.571989731Z" level=info msg="CreateContainer within sandbox \"3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 30 13:54:56.598331 containerd[1972]: time="2025-01-30T13:54:56.598201871Z" level=info msg="CreateContainer within sandbox \"3567aa3b4036e483139656594c3401f91e59d82115437b76582ec80453b9026b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e001fc8870a599939166e3d8ec6c919a4ea0ba975d28703f11fd01c595266497\""
Jan 30 13:54:56.599842 containerd[1972]: time="2025-01-30T13:54:56.599742637Z" level=info msg="StartContainer for \"e001fc8870a599939166e3d8ec6c919a4ea0ba975d28703f11fd01c595266497\""
Jan 30 13:54:56.634603 systemd[1]: Started cri-containerd-e001fc8870a599939166e3d8ec6c919a4ea0ba975d28703f11fd01c595266497.scope - libcontainer container e001fc8870a599939166e3d8ec6c919a4ea0ba975d28703f11fd01c595266497.
Jan 30 13:54:56.671781 containerd[1972]: time="2025-01-30T13:54:56.671732853Z" level=info msg="StartContainer for \"e001fc8870a599939166e3d8ec6c919a4ea0ba975d28703f11fd01c595266497\" returns successfully"
Jan 30 13:54:57.126636 systemd-networkd[1849]: cali5ec59c6bf6e: Gained IPv6LL
Jan 30 13:54:57.409033 kubelet[2420]: E0130 13:54:57.408860 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:58.409529 kubelet[2420]: E0130 13:54:58.409467 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:59.410496 kubelet[2420]: E0130 13:54:59.410441 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:54:59.576570 ntpd[1942]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 30 13:54:59.577067 ntpd[1942]: 30 Jan 13:54:59 ntpd[1942]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123